TheRegister

Subscribe to TheRegister feed
Articles from www.theregister.com
Updated: 1 hour 10 min ago

NASA's Psyche mission set for a brief encounter with Mars

2 hours 11 min ago
More than two years after launch, NASA's Psyche mission will whizz past Mars today, May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, which was launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid Psyche. In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the signals from the spacecraft as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory. These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries is the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune. The trajectory allowed significant propellant to be saved. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision. Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. Sarah Bairstow, Psyche's mission planning lead at NASA's Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we'll also make observations with the mission's other science instruments." 
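The Doppler check itself is simple arithmetic: the spacecraft's radial velocity shifts the received carrier frequency in proportion to v/c. A minimal sketch of that relationship (the ~8.4 GHz X-band carrier and the 100 m/s figure below are illustrative assumptions, not mission values):

```python
# Line-of-sight velocity implied by a measured Doppler shift: v = c * (delta_f / f0).
# The non-relativistic form is fine here, since spacecraft speeds are far below c.

C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(f0_hz: float, delta_f_hz: float) -> float:
    """Radial velocity (m/s) implied by a carrier shift delta_f on frequency f0."""
    return C * delta_f_hz / f0_hz

def doppler_shift(f0_hz: float, velocity_ms: float) -> float:
    """Carrier shift (Hz) produced by a given radial velocity."""
    return f0_hz * velocity_ms / C

# Illustrative numbers: an ~8.4 GHz X-band downlink and a 100 m/s velocity change
f0 = 8.4e9
shift = doppler_shift(f0, 100.0)         # ~2.8 kHz
recovered = doppler_velocity(f0, shift)  # back to ~100 m/s
```

Ground stations measure the shift continuously during the encounter, so the velocity change shows up as a characteristic curve rather than a single number.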
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at Arizona State University. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ®
Categories: Linux fréttir

Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028

2 hours 47 min ago
AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ®
Categories: Linux fréttir

Exploited Exchange Server flaw turns OWA inboxes into script launchpads

4 hours 28 min ago
Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. The vulnerability does not affect Exchange Online. 
Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ®
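For readers unfamiliar with the bug class: webmail clients render attacker-supplied HTML, so any markup that survives filtering runs in the reader's authenticated session. The sketch below is a generic allow-list filter to illustrate the defensive idea only; it is not Microsoft's mitigation, and the tag list is an arbitrary example:

```python
# Generic illustration of stored-XSS filtering for HTML email: keep a small
# allow-list of tags, drop event-handler and javascript: attributes, and
# discard <script>/<style> bodies entirely. NOT Microsoft's actual mitigation.
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "b", "i", "a", "br", "div", "span"}  # arbitrary example

class MailSanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0  # >0 while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
            return
        if tag not in ALLOWED_TAGS:
            return  # drop unknown tags such as <img onerror=...>
        # drop on* event handlers and javascript: URLs
        safe = [(k, v) for k, v in attrs
                if not k.startswith("on")
                and not (v or "").lower().lstrip().startswith("javascript:")]
        self.out.append(
            "<" + tag + "".join(' %s="%s"' % (k, v or "") for k, v in safe) + ">")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
            return
        if tag in ALLOWED_TAGS:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def sanitize(html: str) -> str:
    s = MailSanitizer()
    s.feed(html)
    return "".join(s.out)
```

Real webmail filtering is far more involved (CSS, URL schemes, nested encodings), which is why bypasses like this one keep surfacing.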
Categories: Linux fréttir

Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day

5 hours 5 min ago
Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager. Switchzilla dropped an advisory for CVE-2026-20182 (10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system. According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. It means they could steal data, intercept traffic, manipulate an organization's firewall rules, or just bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it. Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system. "A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric." Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for the security flaws that are both actively being exploited and threaten federal agencies. 
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks. Cisco said in its advisory there are no workarounds available, and it "strongly recommends" applying the available fixes. Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should be auditing the auth.log file at /var/log/auth.log for entries related to Accepted publickey for vmanage-admin from unknown or unauthorized IP addresses. Then, check those IP addresses against the configured System IPs that are listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said. Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, 10.0) from February. ®
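Cisco's hunt guidance is straightforward to mechanize: pull the vmanage-admin public-key logins out of auth.log and diff the source addresses against the configured System IPs. A minimal sketch (the allow-listed IPs below are hypothetical placeholders; populate them from the Catalyst SD-WAN Manager web UI as Cisco advises):

```python
# Flag "Accepted publickey for vmanage-admin" logins from addresses that are
# not configured System IPs. KNOWN_SYSTEM_IPS is a hypothetical placeholder.
import re

KNOWN_SYSTEM_IPS = {"10.0.0.1", "10.0.0.2"}  # hypothetical example values

LOGIN_RE = re.compile(
    r"Accepted publickey for vmanage-admin from (\d+\.\d+\.\d+\.\d+)")

def suspicious_logins(lines):
    """Yield (line, ip) for vmanage-admin logins from unknown addresses."""
    for line in lines:
        m = LOGIN_RE.search(line)
        if m and m.group(1) not in KNOWN_SYSTEM_IPS:
            yield line.rstrip(), m.group(1)

# Usage:
# with open("/var/log/auth.log") as f:
#     for line, ip in suspicious_logins(f):
#         print("REVIEW:", ip, line)
```

A hit is an indicator worth investigating, not proof of compromise on its own, since legitimate but undocumented controller addresses would also trip the check.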
Categories: Linux fréttir

X tells Ofcom it will finally check its moderation inbox

5 hours 28 min ago
Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department. Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel. The company also committed to engaging with external experts on how its reporting systems work, following several organizations' complaints that they were unclear whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if the accounts are reported for posting illegal terrorist content. Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises. "Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action." The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms. 
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents and attacks on Jewish sites in Britain, including attacks in Manchester, Golders Green, and recent arson attempts in London. The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation including issues related to Grok is ongoing and that it will continue to probe X's broader illegal content compliance systems. ®
Categories: Linux fréttir

ZTE showcases at GSMA M360 LATAM 2026, driving future business model restructuring - AI & network two-way integration

5 hours 53 min ago
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, participated in GSMA M360 LATAM 2026. Ms. Chen Zhiping, Chief International Ecosystem Representative of ZTE, delivered a keynote speech entitled "Driving Future Business Model Restructuring — AI & Network Two-Way Integration" at the conference. Ms. Chen provided an in-depth analysis of the industrial value of the two-way integration of AI and networks, sharing ZTE's achievements in the Latin American market over the past two decades, its AI-Native network innovation practices, and its full-scenario intelligent solutions, helping Latin American operators complete their strategic upgrade from "connectivity providers" to "digital economy enablers". Facing the AI industry wave, ZTE released its global strategic vision in 2025: "All in AI, AI for All, Becoming a Leader in Connectivity and Intelligent Computing". Ms. Chen stated that this strategy is highly aligned with the core concepts of this GSMA Summit. In the future, ZTE will move beyond traditional network connectivity services, continuously upgrade its basic network capabilities, and comprehensively expand its AI and intelligent computing business layout. Through a two-way integration model of AI empowering the network and the network supporting AI, ZTE will reconstruct a new business model adapted to the AI era and activate new growth momentum for the Latin American digital economy. In terms of AI-enabled network upgrades, ZTE has pioneered the AI-Native network concept, deeply embedding AI capabilities into all network layers and processes to maximize network efficiency and optimize costs. In the wireless network field, ZTE's new 5G BBU integrates native intelligent computing capabilities, effectively improving the overall efficiency of hardware and software resources and increasing cell throughput by 20%. 
Simultaneously, by combining Super-N high-performance power amplifiers and AI intelligent optimization technology, equipment energy consumption is reduced by 38%. Currently, AAU and RRU products equipped with this technology have been deployed on a large scale in several Latin American countries, including Chile, Ecuador, Bolivia, Brazil, and Peru, with over 37,000 units deployed to date, saving local operators millions of dollars in electricity costs annually and achieving efficient, green, and intelligent network upgrades. Built upon AI-Native technology, the AIR Net advanced intelligent network solution enables commercial deployment of "autonomous driving" for networks, comprehensively revolutionizing operator operation and maintenance models and reducing overall TCO. This solution has already been commercially deployed in multiple locations globally. Currently, ZTE's intelligent network capabilities have obtained authoritative L4-level certification from the TM Forum, and its self-developed Co-Claw enterprise-level intelligent agent has been fully implemented internally, continuously improving network automation and intelligence levels and helping operators move towards advanced intelligent networks. In response to the complex and diverse network environment in Latin America, ZTE continues to implement scenario-based coverage solutions to bridge the regional digital divide. In indoor scenarios, ZTE has partnered with Chilean company Millicom to deploy the Qcell solution, achieving stable gigabit coverage throughout buildings. In remote rural scenarios, ZTE collaborates with Brazilian company Claro to implement the RuralPilot simplified rural network solution, addressing network coverage challenges in the vast Amazon region with its low cost and ease of maintenance. ZTE also offers a wide range of home coverage solutions, precisely matching the networking needs of different regions and scenarios in Latin America. Ms. 
Chen Zhiping stated that ZTE will continue to be rooted in the Latin American market, deepen the two-way integration and innovation of AI and networks, and continue to implement green, efficient, and intelligent full-stack ICT solutions to help local operators complete their strategic transformation, upgrade from traditional connectivity service providers to digital economy enablers, comprehensively meet the intelligent needs of industries and families in all scenarios, and work together to build a smart, inclusive, and sustainable new digital ecosystem in Latin America. Contributed by ZTE.
Categories: Linux fréttir

OpenAI caught in TanStack npm supply chain chaos after employee devices compromised

6 hours 12 min ago
OpenAI says attackers behind the TanStack npm supply chain compromise stole internal credentials after reaching two employee devices, forcing the company to rotate signing certificates for several desktop products. The company disclosed this week that it had been caught up in the wider "Mini Shai-Hulud" campaign targeting npm ecosystems and developer infrastructure, though it said there was no evidence that customer data, production systems, or deployed software were compromised. OpenAI said the incident happened during a phased rollout of new supply chain security controls introduced after a previous Axios-related incident. According to the company, the two compromised employee devices had not yet received updated package management protections that would have blocked the malicious dependency. The attackers carried out "credential-focused exfiltration activity" against a limited set of internal repositories reachable from the affected employee machines, according to OpenAI. It said "only limited credential material was successfully exfiltrated from these code repositories." That was apparently enough to trigger a precautionary reset across multiple products. OpenAI is rotating the certificates used to sign macOS versions of ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and is requiring users to update the affected software by June 12. The incident ties OpenAI to the increasingly messy supply chain campaign that has spent the past several weeks worming through npm ecosystems, CI/CD infrastructure, and GitHub Actions workflows. Security firm Socket linked the TanStack compromise to the broader "Mini Shai-Hulud" operation, which abused poisoned automation workflows and stolen publishing credentials to push malicious package updates into trusted software pipelines. 
Researchers tracking the wider Mini Shai-Hulud campaign have connected the activity to a threat group known as TeamPCP, which appears to have developed an unhealthy interest in poisoning npm ecosystems and rifling through developer credentials. TanStack confirmed this week that 84 malicious package versions spanning 42 @tanstack/* packages had been published after attackers compromised parts of its release infrastructure. The poisoned packages were designed largely to steal credentials, including GitHub tokens, cloud secrets, npm credentials, and CI/CD authentication material. The campaign appears linked to earlier Mini Shai-Hulud attacks involving SAP-related npm packages, suggesting the same credential-stealing operation is spreading across multiple developer ecosystems. OpenAI said it is continuing to investigate the incident and monitor for any downstream abuse tied to the stolen credentials. The reassuring news is that OpenAI says no production systems were breached. The less reassuring news is that attackers keep getting deeper into the software assembly line before anybody notices. ®
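For teams auditing their own dependency trees, the published indicators boil down to a name-and-version check against the lockfile. A rough sketch of that check (the BAD_VERSIONS map below is a hypothetical placeholder, not the advisory list; substitute the versions published by TanStack and Socket):

```python
# Scan an npm package-lock.json (v2/v3 format, which lists every installed
# package under "packages") for entries pinned to block-listed versions.
# BAD_VERSIONS is a hypothetical placeholder, not real IoC data.
import json

BAD_VERSIONS = {
    "@tanstack/react-query": {"9.9.9"},  # placeholder version, not a real IoC
}

def flag_compromised(lock_text: str):
    """Return [(package, version), ...] for installed block-listed versions."""
    lock = json.loads(lock_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # entry keys look like "node_modules/@tanstack/react-query"
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        if meta.get("version") in BAD_VERSIONS.get(name, set()):
            hits.append((name, meta["version"]))
    return hits
```

Version checks only catch known-bad releases, of course; rotating any credentials that were present on machines that installed a flagged version is the more important cleanup step.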
Categories: Linux fréttir

Fusion for the future: XLSMART and ZTE partnering for a boundless digital Indonesia

6 hours 21 min ago
Partner Content In Indonesia, the magic of "Bumbu", that perfect spice blend, creates unforgettable flavors. Today, in the digital world, an even grander "fusion" is taking place. Facing the challenge of unifying separate networks across Indonesia's diverse geography, XLSMART partnered with ZTE on a landmark dual-network convergence project, integrating over 20,000 4G base stations and deploying more than 7,000 new 5G sites in just eight months. The initiative has launched the country's first nationwide 5G blanket coverage network, validated by Ookla as the fastest 5G network in H2 2025. Leveraging digital-intelligent tools and ecosystem collaboration, the project significantly enhanced coverage, capacity, and user experience for 73 million subscribers — turning complex delivery challenges into measurable gains in speed and efficiency. Fusion for the Future. Watch how the converged network is powering Indonesia's digital growth. Contributed by ZTE.
Categories: Linux fréttir

UK reloads artillery plans with £1B remote-control howitzer order

6 hours 35 min ago
The British Army is to get 72 next-gen mobile artillery units, in the shape of a remote-controlled howitzer (RCH) module that mounts onto the Boxer armored vehicle already in service. The Ministry of Defence (MoD) announced a £1 billion ($1.35 billion) contract to provide the Army with a modern mobile system capable of providing artillery support against targets up to 70 km (44 miles) away. First deliveries of the RCH 155 units are expected in 2028, with a "minimum deployable capability" expected before the end of the decade. It follows a £52 million early capability demonstrator contract signed in December 2025. The RCH 155 is basically a 155 mm gun housed in a turreted artillery module mounted on the Boxer drive module. It is an auto-loading weapon, capable of firing eight rounds per minute. The unit features a fire control computer with integrated ballistics calculation, plus radio data transmission to a remote artillery control system. Boxer is an eight-wheeled (8x8), all-terrain vehicle designed to take a number of different bolt-on mission modules allowing it to fulfill various roles. The British Army has initially chosen just a few of these types, primarily the troop carrier variant, but also the ambulance module and command vehicle unit. According to the MoD, the barrel, breech, recoil system, and trunnions will be manufactured by German defense biz Rheinmetall at its large-caliber production facility in Telford, using British steel supplied by Sheffield Forgemasters. The Boxer drive modules/chassis, engine, and drivetrain that the weapon system sits on will be manufactured by the UK division of pan-European defense firm KNDS in Stockport. The Army is to receive a total of 623 of these. A new mobile artillery platform was needed to replace the UK's aging fleet of AS-90 self-propelled howitzers. These could easily be mistaken for a tank, thanks to their tracked chassis and turret-mounted gun. 
The last of these were donated to Ukraine over the past few years to help it fight Russia. The UK also procured a small number (14) of Archer mobile artillery systems as a stop-gap while a successor for AS-90 was selected. This is an automated 155 mm gun mounted on a 6x6 articulated truck chassis. "This major investment is defence delivering for the battlefield and for Britain's economy," said Defence Secretary John Healey MP. "By securing next-generation artillery with Germany, not only are we rearming to strengthen NATO against growing Russian aggression but also creating highly skilled jobs here in Britain." Ironically, Britain was one of the earliest partners in the Boxer joint venture, but withdrew from it in 2003 to focus on a different program, the Future Rapid Effect System (FRES). One strand of FRES eventually led to what is now known as the Ajax family of armored vehicles. You may have heard of it. The UK government announced it was rejoining the Boxer program in 2018 in order to meet its Mechanized Infantry Vehicle (MIV) requirement. ®
Categories: Linux fréttir

Britain's latest civil servant is a chatbot trained on GOV.UK misery

7 hours 5 min ago
After years of turning public services into a maze of dead links, phone queues, and eligibility calculators, the UK government has unveiled the inevitable next step: an AI chatbot. The UK government on Friday announced the launch of "GOV.UK Chat," a generative AI assistant bolted into the GOV.UK app and trained on tens of thousands of pages of official guidance that Whitehall is boldly pitching as the "most comprehensive government-built chat tool in the world." Ministers say the system will help people navigate everything from maternity pay and retirement benefits to driving licenses and startup grants without having to dig through the bureaucratic swamp that is modern Britain. According to the government, some public sector call centers handle around 100,000 calls a day, which helps explain why ministers are suddenly very enthusiastic about citizens talking to software instead. Technology Secretary Liz Kendall said people fed up with being stuck on hold should not have to spend hours wading through online guidance either, which sounds suspiciously like somebody inside government has finally used GOV.UK. "For too long, navigating government has felt like a full-time job," she said. "Whether you're a parent trying to find out what childcare you're entitled to, a first-time buyer working out which schemes you can access, or someone approaching retirement, you shouldn't have to spend time trawling through hundreds of web pages to get a straight answer." The rollout comes just months after polling showed plenty of Brits are already uneasy about AI spreading through public services. Concerns ranged from privacy and job losses to fears that dealing with the government will eventually mean getting stuck in an automated support maze when something important goes wrong. The government said human support will still be available alongside the chatbot, at least for the time being. Ministers are keen to stress that GOV.UK Chat is not deciding who gets benefits or owes tax. 
Right now, the system mostly pulls together existing guidance, calculators, and links from across GOV.UK rather than making decisions itself. Given Whitehall's uneven history with large technology projects, that's probably a wise decision. Still, it is not hard to see where this is heading. Today, the chatbot helps you find childcare support. A few years from now, it will probably be explaining why an algorithm flagged your wheelie bin for suspicious behavior. ®
Categories: Linux fréttir

MPs want social media treated more like unsafe toys than harmless apps

7 hours 47 min ago
British MPs are urging the government to tighten online safety laws, arguing social media companies should face the same kind of scrutiny as other products linked to serious harm. In a letter to Liz Kendall and Kanishka Narayan, shared with The Register, the UK's Science, Innovation and Technology Committee said there is now "strong and consistent evidence" linking social media use to harms affecting young people and warned that "no action is not an option." The committee, chaired by Chi Onwurah, said the current system leaves social media companies free to grow their youth user bases while avoiding meaningful responsibility for the subsequent fallout. "The status quo, where social media companies are neither accountable nor responsible for preventing harms, isn't acceptable," Onwurah said. "If any other consumer product caused these harms, it would've been recalled or changed." The intervention forms part of the government's "Growing up in the online world" consultation and follows a March evidence session examining arguments for and against restricting social media access for under-16s. The committee said it heard evidence from clinicians, bereaved parents, academics, child safety groups, and experts studying Australia's social media age limits, as well as accounts from young people and families concerned about harmful content and the effect social media is having on children's wellbeing. While the MPs stopped short of explicitly endorsing a blanket social media ban for teenagers, the letter makes clear the committee thinks ministers have spent too long relying on voluntary action from platforms whose business models still reward engagement above pretty much everything else. 
The committee said existing age restrictions should be properly enforced using "effective and privacy-preserving" age verification systems – rather than checks that can be bypassed by a drawn-on mustache – and called for stronger legal obligations requiring companies to filter illegal content and to block children from viewing harmful material. The letter also revisits the committee's earlier concerns about recommendation algorithms and how platforms deal with harmful and illegal posts, areas where MPs say previous proposals for reform went nowhere. MPs are now urging ministers to revisit those recommendations and bring forward fresh online safety legislation in the next parliamentary session. Particular attention was paid to algorithms and addictive design features. The committee argued that infinite scrolling and similar engagement mechanics should be designed out of platforms entirely, and warned that social media companies cannot keep pretending they are passive hosts while their recommendation systems actively shape what users see. The letter also warned that gaps in the UK's Online Safety Act mean some AI chatbots operating on closed databases currently fall outside the regime, something MPs said must be fixed before the next generation of online platforms disappears into yet another regulatory blind spot. ®
Categories: Linux fréttir

On-call techie decided job was done and hit the bottle – just before his pager went off

9 hours 50 min ago
ON CALL Welcome to another installment of On Call, The Register's weekly reader-contributed column that celebrates the IT professionals who put their lives on pause to provide tech support at all hours. This week, meet a reader we'll Regomize as "Jemaine." In the early 1990s, he found himself in Hong Kong working as a database specialist on VAX/VMS systems. "We'd built a billing application for a telco client in Macau, and it had been running happily for some time," he told On Call. By the time the system needed its first major OS upgrade, Jemaine was therefore happy for the local crew to handle the job. His client had other ideas and, despite also arranging for two DBAs to be present during the upgrade, insisted he show up. This was not a hardship because the job coincided with the Macau Grand Prix and Jemaine wasn't required to be on site. The client had therefore provided him with a hotel room that, as luck would have it, had a view of the track! "A couple of friends ended up crashing my room, and we spent the weekend watching insane drivers hurl cars around an absurdly tight street circuit," Jemaine admitted. The client never called or paged, so after the race Jemaine was confident the upgrade was going well. He and his friends therefore consumed "several bottles of rich Portuguese red wine" and ordered a sumptuous meal. "Dessert had just arrived when my pager went off," he told On Call. Jemaine poured himself into a cab to his client's office and found a situation he described as "vague but clearly serious" because the billing application wouldn't start. "Judging by the silence and the stoic expressions, everyone was quietly panicking," Jemaine wrote. He soon learned that the client had already tried to fix the app by reinstalling the OS twice and had now decided the database was the source of the problem. Jemaine was told to wait while the DBAs reinstalled the database, which "gave me time to sit in a back room and sober up slightly," he admitted to On Call. 
The database rebuild finished at about 2 am, but the application still refused to start. The client then turned to Jemaine. "I was summoned and interrogated by the systems team," he said, and ran a quick check that showed the database was perfectly healthy – but the batch scheduler wasn't running. To probe that problem, Jemaine asked to speak with the lead developer – who, it turned out, was not on site. "An urgent page was sent, and fortunately he called back quickly. His suggestion was to step through the code. This meant compiling a large COBOL program I'd never seen before in DEBUG mode, then single-stepping through it over the phone with the developer." By now, an increasingly anxious semicircle of client staff was watching Jemaine's every move, and he felt like they were silently shifting blame in his direction. "At around 4 am, we found the failure point: batch queue submission. The call was returning a null error code. The developer was baffled." "I reached for the physical manual to see what the function actually did," Jemaine wrote. "And then, for reasons I still credit to the Portuguese wine gods, I asked a simple question: 'What account did you test this under?'" The developer immediately replied: "Administrator." Jemaine asked the OS upgrade team to run the application with administrator privileges, and it immediately worked. "The OS upgrade had introduced a new permission requirement for submitting jobs to the batch queue," Jemaine told On Call. So this was very much not his problem, and he was able to excuse himself and stagger home as the Sun started to rise. "Nobody from the company ever mentioned the incident to me again," he told On Call. "And I can't remember the name of the wine we were drinking." Have you been on call, decided nothing could possibly go wrong, and then been caught out? If so, click here to send On Call an email so we can tell your story on a future Friday. ®
Categories: Linux fréttir

AWS racks M3 Ultra Macs that boast specs you can’t currently buy

10 hours 40 min ago
Amazon Web Services has done something many others can’t: buy a bunch of Apple’s Mac Studio computers. Mac Studio is Apple’s workstation-grade machine and has been hard to find in recent weeks as Cupertino struggles to find enough RAM to fill them, and AI enthusiasts snap up stock to run tools like OpenClaw. At the time of writing, Apple advises buyers they’ll need to wait nine or ten weeks for a Mac Studio to arrive. The cloudy Macs AWS has racked and stacked pack Apple’s M3 Ultra SoC, Cupertino’s most powerful chip. Apple currently sells the Mac Studio with up to 96GB of RAM. AWS on Thursday started offering a cloudy M3 Ultra with 256GB of unified memory, a configuration The Register did not see as an option on Apple.com while preparing this article. The cloudy M3 Ultra machines run on actual Mac Studios packing a 28-core CPU, 60-core GPU, and 32-core Neural Engine. At the time of writing, AWS hadn’t updated its list of EC2 instance types to include the new M3 instances, so we can’t tell you what they’ll cost or whether the cloud giant has departed from its past practice of renting bare metal machines rather than macOS VMs. Apple allows users to create and run macOS virtual machines, but only on Apple hardware, and permits just two VMs per host. Cupertino also restricts use of VMs to four purposes: software development; testing during software development; using macOS Server; and personal, non-commercial use. AWS recommends its cloudy Macs as an ideal platform to build and test apps for all of Apple’s operating systems – even the visionOS that powers its unloved Vision Pro VR goggles. Amazon’s M3 Ultra Mac Studios only made it into two regions – US East and US West (Oregon) – so users elsewhere who fancy a cloudy Mac but need lower latency will have to endure the very on-prem experience of waiting for hardware to show up. ®
Categories: Linux fréttir

Possible Samsung strike puts even more pressure on memory pricing

13 hours 36 min ago
RAM prices have risen after negotiations between Samsung and a union representing many of its workers collapsed – and the union has now called for a lengthy strike to start next week. The National Samsung Electronics Union (NSEU) has noticed the extraordinary profits the Korean giant is making thanks to the high price of RAM, and wants the company to boost members’ pay with bonuses tied to profits. Talks on that idea have stalled, and pointing out that Samsung pays its memory workers less than their peers at SK Hynix earn hasn’t found a receptive ear. The union therefore plans to start an 18-day strike next week. If the industrial action goes ahead, it has the potential to disrupt memory production, which would mean further shortages at a time DRAM is already expensive and hard to acquire due to rampant demand for AI infrastructure. Memory prices have therefore spiked in the last 72 hours – which, ironically, will just increase Samsung’s profits even more! The union has accused Samsung of not taking its arguments seriously, and South Korea’s government has stepped in with attempts to bring the two parties to the table for fresh talks that lawmakers hope will resolve the situation because The Spice Must Flow. Or maybe The RAM Must Roll. Samsung recently posted almost $40 billion in profit for a single quarter, thanks largely to memory sales. That enormous sum, and others like it reported by Korean companies that sell memory and other products in demand from AI builders, caught the attention of Yong-Beom Kim, South Korea’s Chief Presidential Secretary for Policy – a ministerial role. Using his personal Facebook page, Kim suggested funneling a portion of AI profits into a “national dividend fund” that can be used to improve South Korea’s long-term prospects. His post mentions Norway’s sovereign wealth fund, which famously siphoned off revenue from oil sales and invested it in shares to create assets worth over $2 trillion.
Vendors often tell The Register “data is the new oil” so maybe Kim is on to something – although the metaphor may not work well when one considers current events in the Strait of Hormuz and their effect on the world. ®
Categories: Linux fréttir

Cerebras risked it all on dinner plate-sized AI accelerators a decade ago. Today it’s worth $66 billion

Thu, 2026-05-14 23:02
Cerebras Systems has done what many chip startups aspire to but few ever achieve. On Thursday, the long-time Nvidia rival raised $5.55 billion in an initial public offering (IPO), making the company worth more than $66 billion on its first day of trading. The milestone didn’t happen overnight. It took more than a decade, a radically different approach to chipmaking, and two separate attempts at an IPO to pull off. Founded in 2015 by former SeaMicro head Andrew Feldman, Cerebras Systems designed its first chips to look nothing like the GPUs or AI accelerators of the time. The bet that put Cerebras on the map At the time, most high-end GPUs used dies measuring roughly 800 square mm that’d been cut from a larger wafer. Eight or more of these GPUs would typically be stitched together by high-speed interconnects, like NVLink, which allowed them to pool their resources and behave like one big accelerator. Rather than cutting up a wafer into smaller chips just to reconnect them again, Cerebras figured: why not etch all that compute into a wafer-sized chip? And so the Wafer-Scale Engine (WSE), a giant chip measuring 46,225 square mm — about the size of a dinner plate — was born. Cerebras' first chips weren’t just bigger; they were purpose-built for AI training and sported a novel compute engine designed to speed up the highly sparse matrix multiply-accumulate operations common in deep learning. This hardware sparsity took advantage of the fact that large portions of a neural network’s parameters ultimately end up being zeros, allowing Cerebras to boost the effective computational output of its first-gen WSE accelerators from 2.65 16-bit petaFLOPS to 26.5. Nvidia added support for sparsity in its Ampere generation a year later, but it only worked for a specific ratio (2:4), limiting its effectiveness to select use cases. To train a model, up to 16 of these chips could be ganged together over a high-speed interconnect.
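The sparsity math above is easy to sanity-check. Here's a rough sketch – our arithmetic, not Cerebras' actual hardware logic: if an accelerator can skip multiply-accumulates on zero operands, effective throughput scales with the reciprocal of the nonzero fraction.

```python
def effective_pflops(dense_pflops: float, nonzero_fraction: float) -> float:
    """Effective throughput when hardware skips multiply-accumulates on zeros.

    If only `nonzero_fraction` of operands are nonzero and the rest can be
    skipped, useful throughput is multiplied by 1 / nonzero_fraction.
    """
    return dense_pflops / nonzero_fraction

# The WSE-1's claimed jump from 2.65 dense PFLOPS to 26.5 "sparse" PFLOPS
# corresponds to exploiting roughly 10 percent density
assert abs(effective_pflops(2.65, 0.10) - 26.5) < 1e-9

# Nvidia's 2:4 structured sparsity fixes the nonzero fraction at 50 percent,
# capping the gain at 2x however sparse the network really is
assert effective_pflops(1.0, 0.5) == 2.0
```

By that reckoning, the 10x figure implies the hardware skips around 90 percent of operands, while a fixed 2:4 pattern can never deliver more than a 2x boost.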
This was kind of important too, because unlike GPUs, which stored model weights in HBM or GDDR memory, Cerebras' chips were almost entirely reliant on on-chip SRAM. Although SRAM is insanely fast, which is why it’s used for caches in basically every modern processor, it’s not particularly space efficient. While Cerebras' first wafer-scale accelerator could theoretically reach 9 petabytes per second of memory bandwidth, it was limited to just 18 GB of capacity at a time when Nvidia was already at 32 GB per GPU and about to make the leap to 40 GB or even 80 GB per chip. Still, the approach was performant enough that for its second-generation wafer-scale accelerator, launched in 2021, Cerebras doubled down on the architecture. While the WSE-2 wasn’t physically larger, the move to TSMC’s 7nm process tech allowed the company to more than double the transistor count, compute density, SRAM capacity, and bandwidth. The chips also supported larger clusters, scaling up to 192 systems, though in practice these clusters were usually smaller, at between 16 and 32 systems per site. It was also around this time that Cerebras caught the attention of United Arab Emirates-based cloud provider G42, which quickly became its largest financier. By mid-2023, the chip startup had secured orders worth $900 million for nine supercomputing sites with 36 exaFLOPS of super-sparse AI compute between them. A year later, Cerebras made the jump to TSMC’s 5nm process with the WSE-3, and while memory and bandwidth only saw modest gains, compute once again doubled, now topping 125 petaFLOPS of sparse (12.5 petaFLOPS dense) compute at 16-bit precision. Cerebras’ CS-3 systems have seen the company's largest deployment yet, and now power the majority of the Condor Galaxy cluster it built for G42, as well as several new sites across North America and Europe.
Cerebras' inference inflection Up to mid-2024, Cerebras' primary focus had been on training, but then the company announced a boutique inference-as-a-service offering to rival those from competing chip startups like Groq and SambaNova. It turns out Cerebras’ latest AI accelerators’ massive SRAM capacity not only made them potent training accelerators but also left them particularly well suited to high-speed LLM inference. In its third iteration, Cerebras' wafer-scale accelerators boasted more memory bandwidth than they could realistically use. At 21 PB/s, the chip’s memory is nearly 1,000x faster than Nvidia’s new Rubin GPUs. This, along with a dash of speculative decoding, allowed Cerebras to generate tokens far faster than any GPU-based system of the time. Even today, Cerebras routinely ranks among the fastest inference providers in the world. According to Artificial Analysis, Cerebras' kit can churn out more than 2,200 tokens a second when running GPT-OSS 120B High, 2.8x faster than the next-closest GPU cloud, Fireworks. Cerebras didn’t know it at the time, but its inference platform would be a much bigger business than anyone had expected, and in September 2024, the company submitted its S-1 filing to the SEC to take the company public. Almost exactly a year later, Feldman quietly pulled the S-1, delaying the IPO. His reasons? The company’s initial S-1 filing was rather concerning, as it showed G42 was responsible for 87 percent of its revenues. But in the year since launching its inference platform, Cerebras had racked up several high-profile customer wins from big names like Alphasense, AWS, Cognition, Meta, Mistral AI, Notion, and Perplexity. Feldman explained that the initial S-1 didn’t yet show the financial results of this growth. The company believed it would have a better story to tell investors later down the road. Cerebras' inference platform has only grown since then.
The company has steadily expanded its footprint while announcing deeper relationships with AWS and adding OpenAI as a customer. On Thursday, the startup officially joined the NASDAQ under the ticker CBRS, having raised $5.55 billion in the process. Shares skyrocketed nearly 70 percent on the first day of trading, as investors poured their money into a new way to play the AI boom. An IPO is something many startups aspire to but few, especially in the cutthroat world of semiconductors, ever accomplish. What happens now From a technical perspective, Cerebras is overdue for a refresh. The WSE-3 accelerators that pushed it over the IPO finish line are getting rather long in the tooth, and the architecture lead afforded by its SRAM-heavy design is shrinking. Nvidia’s acquihire of Groq gave Feldman’s long-time rival an SRAM-packed inference platform of its own, while others are racing to catch up. From here, we can only speculate, but we’ll hazard a guess that Cerebras' new shareholders are going to want to see new silicon sooner rather than later. Based on its existing roadmap, we expect WSE-4 will offer a sizable leap in floating point performance, though not necessarily at 16-bit precision. Much of the industry has aligned around lower-precision data types like FP8 and FP4. An exaFLOP of ultra-sparse FP4 compute wouldn’t shock us in the least. How useful sparsity would actually be for LLM inference is another matter. LLM inference hasn’t historically benefited much from sparsity, but that’s never stopped chipmakers from advertising sparse FLOPS anyway. We also expect to see Cerebras pack more SRAM into its next wafer-scale compute platform, possibly using TSMC’s 3D chip stacking tech to do it. The WSE-3’s 44GB of SRAM capacity remains a limiting factor for what models it can and can’t serve efficiently.
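That 44GB ceiling is straightforward to reason about. A back-of-envelope sketch – our arithmetic, with hypothetical weight precisions, not Cerebras' published sizing: divide a model's weight footprint by per-wafer SRAM to get a floor on how many accelerators are needed just to hold the parameters.

```python
import math

def wafers_needed(params_billions: float, bytes_per_param: float,
                  sram_gb_per_wafer: float = 44.0) -> int:
    """Minimum wafer count needed to hold a model's weights entirely in SRAM."""
    weight_gb = params_billions * bytes_per_param  # 1B params at 1 byte = 1 GB
    return math.ceil(weight_gb / sram_gb_per_wafer)

# A 1-trillion-parameter model on 44GB WSE-3s:
assert wafers_needed(1000, 0.5) == 12  # 4-bit weights: ~500 GB of SRAM
assert wafers_needed(1000, 2.0) == 46  # 16-bit weights: ~2 TB of SRAM
```

Those two precisions bracket roughly the 12-to-48 range quoted for a trillion-parameter model, before any pruning is taken into account.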
A trillion parameter model like Kimi K2 would require somewhere between 12 and 48 of Cerebras' WSE-3 accelerators, depending on how the model weights are stored and how many parameters have been pruned, so any increase in SRAM capacity would go a long way toward improving the efficiency of its accelerators. More collaborations Alongside new silicon, we can also expect to see more collaborations akin to Cerebras' tie-up with AWS. Earlier this year, AWS announced it would combine its Trainium3 AI accelerators with Cerebras' WSE-3-based systems to speed up its inference platform in much the same way Nvidia is doing with Groq’s accelerators. Cerebras could certainly do something similar with AMD or any other chipmaker. In this sense, Cerebras is in a position to offer its chips as decode accelerators, which take on the bandwidth-intensive parts of the inference pipeline, while other accelerators handle the compute-heavy prompt-processing side of the equation. However Cerebras frames its next collab, its shareholders are going to expect growth. And as the saying goes, the enemy of my enemy is my friend. ®
Categories: Linux fréttir

Nobody believes the 'criminals and scumbags' who hacked Canvas really deleted stolen student data

Thu, 2026-05-14 22:42
FEATURE When Instructure “reached an agreement” with data theft and extortion crew ShinyHunters this week, after attackers claimed to have stolen data tied to 275 million students, teachers, and staff, the education tech giant assured Canvas users that their private chats and email addresses would not turn up on a dark-web marketplace, and that they would not be extorted over the incident. “We received digital confirmation of data destruction (shred logs),” Instructure assured the nearly 9,000 affected universities and K-12 schools. “We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.” Not a single responder that The Register spoke with believes this is true. “Do I believe they deleted the data? No. They're criminals and scumbags,” Recorded Future threat intelligence analyst Allan Liska, aka the Ransomware Sommelier, told us. “But, this is part of what Max Smeets calls ‘The Ransomware Trust Paradox,’” he added. “Ransomware groups have to, minimally, not post data they claimed to have deleted or no one will pay them in the future, but this is done knowing that the data is likely not deleted.” Halcyon Ransomware Research Center SVP Cynthia Kaiser, who previously spent two decades at the FBI, said she doesn’t think that anyone who studies ransomware groups’ operations believes the gang actually destroyed the stolen files. “‘We destroyed the data’ is a standard line from extortion groups once a payment is made or negotiations conclude, but time after time it has proven untrue,” Kaiser told The Register. “ShinyHunters in particular has a documented history of recycling, reselling, and re-leveraging stolen data across campaigns – data they claimed was contained from earlier intrusions has resurfaced on criminal forums months and years later.” Kaiser also doesn’t think this is the last threat that the schools will face from the Canvas breach.
“Halcyon expects targeted phishing waves against staff, students, and parents over the next six to 12 months using leaked names, email addresses, and Canvas chat context to make the lures convincing,” she said. To be clear: Instructure execs never directly said the company paid the ransom, and we don’t know the exact amount of money the criminals demanded from the digital learning biz. We do know, however, that “reached an agreement” is corporate-speak for the victim paid up. Doug Thompson, chief education architect at cybersecurity firm Tanium, estimates the figure sits somewhere between $5 million and $30 million. Meanwhile, this latest extortion attack illustrates the impossible choice facing organizations entrusted with protecting people’s data when digital thieves breach their networks and steal sensitive information. “The FBI says don’t pay,” Thompson told The Register. “But the operational reality at 3 a.m. during finals week or enrollment season can push institutions toward a very different calculation. Until that incentive structure changes, education is likely to remain unusually vulnerable to extortion pressure.” To pay, or not to pay? The US federal government, law enforcement agencies, and private-sector threat intelligence analysts all advise victims not to pay a ransom. “Paying ransoms rewards and incentivizes the criminals, funding their search for new victims, and I’ve long advocated before for a ban on ransomware payments,” Emsisoft threat analyst Luke Connolly told us. “But in the absence of regulation applying to all organizations, the stark reality is that Instructure faced a crisis, and they negotiated to try to minimize risk and harm.” No company wants to pay a ransom to its attackers, and most say they won’t – at least in principle – because they don’t want to fund criminal operations and incentivize the crooks. There’s also no guarantee that paying will secure the return of their data or prevent additional extortion attempts.
CrowdStrike surveyed 1,100 global security leaders last summer, and of the 78 percent who said they experienced a ransomware attack in the past year, 83 percent of those that paid ransoms were attacked again. Plus 93 percent lost data regardless of payment. While data suggests that fewer organizations are paying criminals’ ransom demands - Chainalysis found the percentage of paying victims in 2025 dropped to an all-time low of 28 percent, despite attacks hitting record highs - when faced with extortion or a ransomware infection, the "to pay or not to pay" debate becomes much more complicated. “Most organizations still say publicly that they won't pay, and many genuinely don't, but when the alternative is mass downstream harm to students, parents, and thousands of customer institutions, the calculus shifts,” Kaiser said. “Pay-or-leak groups like ShinyHunters specifically engineer that calculus by creating intense financial and reputational pressure, and when demands go unmet, they escalate to direct harassment of victim companies, employees, and clients.” ShinyHunters did just that. The crew initially compromised Instructure in late April, and after the initial pay-or-leak deadline passed on May 6, ShinyHunters switched tactics to school-by-school extortion. They injected a ransom message into about 330 Canvas school login portals, causing Instructure to take the platform offline for a day - during final exams and Advanced Placement testing for many. Other ransomware scum have gone to horrifying extremes, posting pictures and addresses of preschool children in an effort to get a payday, leaking cancer patients’ nude photos and threatening them with swatting attacks. Mandiant Consulting CTO Charles Carmakal previously told The Register that ransomware infections have morphed into "psychological attacks” with crooks SIM swapping executives’ kids to pressure their parents into paying. 
Calculating risk In addition to responding to criminals directly harassing their students, patients, customers and employees, victim organizations also have to take into account potential lawsuits if the crooks dump individuals’ personal or health data, and the reputational hit from seeing all of this protected information published online. The decision about what to do in a ransomware attack revolves around risk reduction, Liska said. “Not paying a ransom means an increased risk of data exposure, which in this case could cause serious harm,” he told us. “While there is no good decision in most ransomware negotiations, the idea is to protect as many people as possible and that may mean that paying is the least bad option.” While he didn’t respond to or investigate the Instructure case, “protecting children's data is absolutely a critical factor in these types of decisions, especially when the attacks originate from one of the groups associated with The Com,” Liska added. The Com – a loosely knit network of primarily English-speaking hackers, SIM swappers, and extortionists spanning interconnected groups such as ShinyHunters and Scattered Lapsus$ Hunters – has been known to blackmail kids and teens into carrying out shootings, stabbings, and other real-life criminal acts. “These groups are known to coerce victims using threats of physical harm, including bricking and swatting," he said. "Not paying may have increased the risk of serious harm to the children whose data was exposed.” Ed sector 'more likely to pay' Instructure’s intrusion follows several other high-profile attacks against education-sector software providers. In December 2024, PowerSchool suffered a breach, affecting tens of millions of students. The company reportedly paid about $2.85 million in bitcoin in exchange for a video supposedly showing the attackers destroying the data.
But about five months later, in May 2025, the ed-tech provider’s school district customers received individual extortion threats from either the same ransomware crew that hit PowerSchool or someone connected to the crooks. Earlier this year, ShinyHunters claimed it stole data from K-12 software provider Infinite Campus as part of a broader wave of Salesforce-related intrusions. “Education keeps emerging as one of the sectors where organizations are still more likely to pay under pressure,” Thompson said. Students’ – especially minors’ – data contains highly sensitive personal details and therefore presents an attractive target for attackers, but the sector's willingness to pay is also driven in part by market pressure and economics. It’s costly and inconvenient for schools to switch learning management systems, and they are typically locked into multi-year contracts with these software vendors, according to Thompson. “The other issue is concentration,” he said. “A relatively small number of vendors hold data for enormous portions of the education system. PowerSchool, Infinite Campus, Canvas, Blackboard; those four hold records on something close to every American student, and hackers know it. Three of the four have been breached at a multi-million-record scale in the last 18 months.” Thompson said he expects additional attacks against major education platforms to follow. “The economics are good. Instructure paid. PowerSchool paid last year. Every other ed-tech vendor's board just had a conversation about what their number would be,” he told us. “The pattern is established.” According to Connolly, the universities and K-12 schools affected by the Canvas hack shouldn’t consider their data safe, regardless of Instructure’s assurances or the crooks' promises to delete it. “There will be future attacks, without a doubt.” ®
Categories: Linux fréttir

Sick and wrong: Ontario auditors find doctors' AI note takers routinely blow basic facts

Thu, 2026-05-14 20:50
The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems. The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector. As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy. What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations. Nine out of 20 AI systems reportedly “fabricated information and made suggestions to patients' treatment plans” that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating incorrect information in the sample reports, such as no masses being found, or patients being anxious, even though these things were never discussed in the recordings. Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report.
OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems. Bad evaluations don’t help, either AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated. According to the report, the weight given to various categories of AI Scribe performance was wonky. While 30 percent of a platform’s evaluation score depended solely on whether the vendor had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score. Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed an additional 4 percentage points. In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems. “Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime. The Register reached out to the Ontario Health Ministry for its take on the report, and to ask whether it would act on the report's recommendations for the AI Scribe program, but we didn’t immediately hear back.
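To see how lopsided weightings can skew procurement, here's a toy scoring model using the audit's stated percentages, with invented vendor scores and the unaccounted 58 percent lumped into an "other" bucket purely for illustration:

```python
# Weights from the audit's figures; "other" covers the remaining criteria
WEIGHTS = {
    "domestic_presence": 0.30,
    "note_accuracy": 0.04,
    "bias_controls": 0.02,
    "threat_risk_privacy": 0.02,
    "soc2_type2": 0.04,
    "other": 0.58,
}

def total_score(vendor: dict) -> float:
    """Weighted sum of per-category scores (each scored 0.0 to 1.0)."""
    return sum(WEIGHTS[category] * vendor[category] for category in WEIGHTS)

# Invented vendors: one local but wildly inaccurate, one remote but excellent
local_inaccurate = {"domestic_presence": 1.0, "note_accuracy": 0.2,
                    "bias_controls": 0.2, "threat_risk_privacy": 0.2,
                    "soc2_type2": 0.2, "other": 0.7}
remote_accurate = {"domestic_presence": 0.0, "note_accuracy": 1.0,
                   "bias_controls": 1.0, "threat_risk_privacy": 1.0,
                   "soc2_type2": 1.0, "other": 0.7}

# The Ontario office outweighs accuracy, bias, and security combined
assert total_score(local_inaccurate) > total_score(remote_accurate)
```

With these made-up numbers, the local vendor scores about 0.73 against the accurate rival's 0.53, despite failing on every clinically meaningful criterion.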
A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and there have been no known reports of patient harms associated with the technology. ®
Categories: Linux fréttir

Anthropic tosses agents into the API billing pool

Thu, 2026-05-14 20:03
Anthropic has further restricted access to its Claude model family while framing the limitation as responsive customer service. "We've heard your questions about SDK and claude -p usage sharing your subscription rate limits with Claude Code and chat," the company said in a social media post. "Starting June 15, programmatic usage gets its own dedicated budget instead. Your subscription limits don't change, they're now reserved for interactive use." Subscription usage only applies to interactive use of Claude Code, Claude Cowork, and Claude.ai. Interactive mode involves a user typing a prompt and receiving a response. There's a human in the loop. Programmatic interaction, whether via Anthropic's own Agent SDK, headless mode, or a third-party tool, will be counted against a separate usage pool funded by a credit equal to the customer's subscription fee. So a Pro subscriber paying $20 per month will have two token supply chains – one for interactive usage and one for programmatic usage, which the subscriber must claim to obtain. Programmatic usage is metered against that credit at costlier API rates. And if the credit is exhausted, spillover programmatic tokens get billed at (occasionally discounted) API rates through "extra usage," a separate token allotment that, if enabled, exists mainly as a way to avoid a sudden service cutoff and to set a limit on spending. The questions from users arose because Anthropic's prior efforts to prevent customers from gorging on tokens at the all-you-can-eat subscription trough haven't been comprehensive. The AI biz, mindful that it will need to show a profit eventually, has been trying to push customers toward its metered API and to constrain consumption of flat-rate subscription tokens. Microsoft's GitHub Copilot has embarked on a similar transition. Anthropic initially did so by disallowing the use of Claude subscriptions with third-party harnesses – applications like OpenCode that coordinate communication with the backend model.
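The two-pool arrangement can be sketched roughly like this – the function, figures, and spillover behavior are our illustration, not Anthropic's actual billing code:

```python
def bill_programmatic(usage_usd: float, credit_usd: float,
                      extra_usage_enabled: bool, extra_cap_usd: float):
    """Split a month's programmatic spend into (credit-covered, billed, cut off).

    Usage draws down the monthly credit first; spillover is billed as "extra
    usage" up to a spending cap, and anything beyond that goes unserved.
    """
    covered = min(usage_usd, credit_usd)
    spill = usage_usd - covered
    if not extra_usage_enabled:
        return covered, 0.0, spill
    billed = min(spill, extra_cap_usd)
    return covered, billed, spill - billed

# A $20 Pro subscriber who uses only $12 of the programmatic credit: the
# remaining $8 doesn't roll over, it's simply lost
assert bill_programmatic(12.0, 20.0, False, 0.0) == (12.0, 0.0, 0.0)

# $30 of programmatic use with a $5 extra-usage cap: $20 from the credit,
# $5 billed at API rates, and the last $5 of demand is cut off
assert bill_programmatic(30.0, 20.0, True, 5.0) == (20.0, 5.0, 5.0)
```

The no-rollover rule is what makes the first case sting: under-consume the credit and the difference simply evaporates at month's end.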
The harness ban dates back to February 2024, but Anthropic seldom enforced it until earlier this year, when demand for AI inference began to outpace the company's Claude supply. In February this year, growing interest in OpenClaw, an open source agent platform that encourages long-running, token-burning tasks, prompted Anthropic to get serious about its ban on using third-party harnesses with Claude subscriptions. But customers wondered about third-party applications built with Anthropic’s own Agent SDK, which hadn't been explicitly disallowed, and about the use of headless mode (claude -p), a way to have Claude work on a task without interaction. They now have their answer. It's worth noting that, if the programmatic credit is not exhausted, it doesn't roll over. It gets lost, or you might say, Anthropic reclaims it. The company refers to the credit using a dollar sign, but it's not redeemable currency. It has already been spent. So customers seeking to get the full value from the new arrangement need to calibrate their programmatic usage to consume the full credit every month, no more and no less. Anthropic's recently announced deal with SpaceX to obtain the compute capacity of its Colossus 1 datacenter, along with its removal of peak-hours usage restrictions, raised hopes among developers that more tolerant usage policies might return. This latest subscription limitation shows that's not happening. ®
Categories: Linux fréttir

Grad-to-be turns graduation cap into Rust-powered light show

Thu, 2026-05-14 17:30
College graduation season has begun in the United States, and one soon-to-graduate computer science student has decided to decorate his graduation cap in the way any good maker would: by writing some Rust code and wiring it up with LEDs that light up when the tassel moves from right to left.

Eric Park, due to walk in his commencement ceremony on Friday at Purdue University, published a blog post this week explaining the project, which he said he undertook as an alternative to building a contraption that would set his mortarboard aflame when the tassel was moved.

Unfortunately for Park, many American universities (and some in other countries like the UK) require college students who want to walk in commencement ceremonies to rent their gowns and mortarboards. It’s not uncommon for students to be charged a ludicrous amount to rent the set, and in many cases, rental companies require students to return their mortarboards and gowns alike, as is the case for Park.

“The rental agreements clause 98.c.2 probably forbids [burning a rented mortarboard], and I don’t think Purdue would like it very much if I set the stage on fire,” Park said in the post.

An easier-to-remove version consisting of LED strips, a reed switch, and a magnet, controlled by a super-tiny Digispark ATtiny85, presented itself as the alternative. The result, as demonstrated in a YouTube video, is a mortarboard that is all aglow, and flameless, as soon as the reed switch is activated by the magnet placed on the left-hand side of the hat.

“The entire thing was stuck on with double-sided tape and Kapton tape, and I tried a small patch just to make sure it wouldn't rip up the fabric,” Park told The Register in an email.

The lightweight and easy-to-remove design also necessitates a compact power source. Unfortunately, Park had to settle for an external battery pack carried in the pocket to power the unit.
“It was going to be all self-contained with a 21700 cell, but I didn't have a boost converter on hand so I decided to make do with the power bank solution,” the soon-to-be graduate told us.

According to Park, the build was relatively quick: Hardware took a bit more than three hours, and that was largely because he no longer had access to a full lab and was stuck working with his home toolset. Writing the code took a couple of hours, which Park attributed to his insistence on using Rust.

“It probably would’ve been easier if I didn’t use Rust and just used the Arduino libraries, or if I used a different board,” Park explained in his blog post. “But I was really married to this blog post title … and I was pretty sure an ESP32 board would’ve been overkill and wouldn’t have stayed on the cap properly.”

For those who haven’t clicked through to read his blog post, its headline is simply “my graduation cap runs Rust.” That’s a pretty solid title - at the very least, it’s going to get people to read it, and read they have.

“I've read through the comments on Hacker News and I'm happy and thankful about all of the positive comments,” Park told us. “It's great to see a silly but fun project like this reach a wide audience.” “I particularly liked the guy that was reminded why he got into this field through my project,” Park added.

So, will Purdue students graduating alongside Park get treated to a surprise light show? Sadly, no - he said in the blog post, and reiterated to us, that he’s probably not going to wear it during the ceremony.

“I thought about it but decided it looks pretty tacky,” Park wrote in his blog post. “It looks like what kids would think of as a gaming PC and what boomers would think of as a seizure.” He might toss it on for photo ops after the ceremony, but that’s about it, Park told us.
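The heart of such a build is the trigger: the reed switch closes when the magnet on the left side of the cap comes near it, and the firmware switches the LEDs on. Here's a minimal Rust sketch of that debounced trigger logic - this is not Park's published firmware, and the type names, threshold value, and animation name are all invented for illustration:

```rust
// Hypothetical sketch of a reed-switch trigger with debouncing.
// Not Park's actual code; all identifiers here are made up.

#[derive(Debug, PartialEq)]
enum LedState {
    Off,
    Rainbow, // some celebratory animation once the tassel is moved
}

/// Debounced decision: only treat the reed switch as genuinely closed
/// (magnet nearby, tassel moved left) after it has read closed for
/// `threshold` consecutive samples, so a stray bounce or vibration
/// doesn't light the cap early.
fn next_state(consecutive_closed: u32, threshold: u32) -> LedState {
    if consecutive_closed >= threshold {
        LedState::Rainbow
    } else {
        LedState::Off
    }
}

fn main() {
    // A single noisy reading does not trigger the light show...
    assert_eq!(next_state(1, 5), LedState::Off);
    // ...but a stably closed switch does.
    assert_eq!(next_state(5, 5), LedState::Rainbow);
    println!("ok");
}
```

On an ATtiny85 the sampling would happen in a timer loop with the switch on a GPIO pin, but the decision logic itself stays this small.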
That said, Park did publish the code on GitHub, so if some other all-but-commenced college student were to take it upon themselves to build their own copy and wear it during their ceremony, that's on them. If I were graduating, I'd consider adding some speakers to the setup and piping in some music, too. Don't come running to El Reg if such a move gets you in trouble, though: We claim no responsibility for commencement shenanigans. ®
Categories: Linux fréttir

KDE bags €1.3M as Europe realizes it might need an OS of its own

Thu, 2026-05-14 15:38
The KDE project turns 30 in five months, but it already got an early birthday present: €1,285,200 from Germany's Sovereign Tech Fund. That's £1.1 million, or $1.5 million in US bucks. The KDE team already has some ideas about how it will spend it, and the project's thank-you note mentions a few.

This is not the first time we have mentioned the Sovereign Tech Fund's largesse. In 2023, it gave €1 million to GNOME, and then in 2024 it funded both FreeBSD and Samba. Since then, Donald Trump began his second US presidency, and the push for European digital sovereignty has gained considerably more urgency – as we reported from this year's Open Source Policy Summit in Brussels.

KDE Linux is the desktop project's technologically radical in-house distro, which is still in development. We have mentioned this a couple of times, when it was announced in 2024 as "Project Banana," and again in 2025, when it reached alpha.

KDE Linux borrows some of its design from Valve's SteamOS 3. Both are immutable distros, based on Arch Linux, with dual Btrfs-formatted root partitions. For failover, updates alternate between the two partitions, much as they do on ChromeOS (and both obviously use KDE Plasma as their desktop). This has required development work - for instance, before SteamOS, Btrfs required unique partition IDs - and for that, Valve partnered with Spanish workers' cooperative Igalia, which is also working on the Rust-based Servo web rendering engine. For that effort, last year Igalia also received STF funding.

SteamOS has millions of users, and ChromeOS hundreds of millions - even if its future replacement is coming into view. The resilience of these OSes in frequent, maintenance-free use is about as well established as end-user-facing Linux gets. One could interpret the STF money as some level of endorsement of the ideas behind KDE Linux. Perhaps it will soon join this short list of European alternatives to Microsoft Windows.
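The resilience of the dual-root-partition scheme comes from a simple rule: write the update to the inactive root, and only flip the bootloader once the write succeeds. A toy model in Rust - not KDE Linux or SteamOS code, just the A/B logic under assumed names:

```rust
// Toy model of A/B root-partition updates as used by SteamOS 3 and
// planned for KDE Linux. Illustrative only; names are invented.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Slot {
    A,
    B,
}

/// The root partition not currently booted - the one updates go to.
fn inactive(active: Slot) -> Slot {
    match active {
        Slot::A => Slot::B,
        Slot::B => Slot::A,
    }
}

/// Which slot to boot next: the freshly written one if the update
/// succeeded, otherwise the current one - so a botched update still
/// leaves a known-good root to fall back to.
fn apply_update(active: Slot, write_succeeded: bool) -> Slot {
    if write_succeeded {
        inactive(active)
    } else {
        active
    }
}

fn main() {
    // A good update flips to the other root...
    assert_eq!(apply_update(Slot::A, true), Slot::B);
    // ...while a failed one keeps booting the old, working root.
    assert_eq!(apply_update(Slot::A, false), Slot::A);
    println!("ok");
}
```

The real implementations layer Btrfs snapshots and bootloader integration on top, but the failover guarantee rests on this alternation.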
Interest in moving European organizations away from American cloud services is growing rapidly, of course. On the small end of the scale, digital artist Wimer Hazenberg recently described How I Moved My Digital Stack to Europe.

Taking a broader view, earlier this week, the Financial Times reported on Life without US Tech. It describes how International Criminal Court judge Nicolas Guillou was the target of US sanctions, and found himself locked out of everything that relied on American companies. In October last year, The Register mentioned similar issues faced by ICC prosecutor Karim Khan, when reporting allegations that the ICC was kicking MS Office to the curb. (A few months ago, Microsoft conceded some "inaccuracy" from its spokesperson in that case.) It seems he was not alone.

The ICC is moving to OpenDesk from German organization ZenDIS, both of which we mentioned in our report from FOSDEM on messaging systems. These are apps and suites, rather than OSes – they leave the question of the host OS open. That means organizations with large existing investment in Windows (and institutional knowledge of supporting Windows) can keep it for now, while moving to new tools.

That's not quick enough for those who want to banish American OSes sooner. Last month, The Reg mentioned France's Directorate for Digital Affairs, DINUM, which is planning to adopt Linux. Some more information is emerging about how it may do it.

Rather than building a whole new distro of its own – such as KDE Linux, or the Fedora-based EU OS proposal we looked at last year – DINUM is building a Nix configuration, which it can simply apply to generate a complete bespoke immutable OS image. The base image is called Sécurix. The project page describes it as an OS base for secure workstations, designed according to the ANSSI recommendations for the secure administration of information systems. As an example of how to use it, there's Bureautix.
Rather than authenticating against complicated network directories such as LDAP or the Red Hat-backed FreeIPA, Bureautix keeps it local: user configuration is synced from servers to client machines along with the software configuration, and users sign in with a YubiKey.

The names Sécurix and Bureautix are nods to the famous indomitable Gauls Astérix and Obélix, created by writer René Goscinny, who died in 1977 aged 51, and artist Albert Uderzo, who died in 2020 at 92. These ancient Gauls have outlived their creators: the latest album, Astérix in Lusitania, came out in October 2025, and this vulture recommends it. ®
Categories: Linux fréttir

Pages