TheRegister

Articles from www.theregister.com

Cerebras risked it all on dinner plate-sized AI accelerators a decade ago. Today it’s worth $66 billion

Thu, 2026-05-14 23:02
Cerebras Systems has done what many chip startups aspire to but few ever achieve. On Thursday, the long-time Nvidia rival raised $5.55 billion in an initial public offering (IPO), making the company worth more than $66 billion on its first day of trading.

The milestone didn’t happen overnight. It took more than a decade, a radically different approach to chipmaking, and two separate attempts at an IPO to pull off. Founded in 2015 by former SeaMicro head Andrew Feldman, Cerebras Systems' first chips looked nothing like the GPUs or AI accelerators of the time.

The bet that put Cerebras on the map

At the time, most high-end GPUs used dies measuring roughly 800 square mm that’d been cut from a larger wafer. Eight or more of these GPUs would typically be stitched together by high-speed interconnects, like NVLink, which allowed them to pool their resources and behave like one big accelerator. Rather than cutting up a wafer into smaller chips just to reconnect them again, Cerebras figured: why not etch all that compute into a wafer-sized chip? And so the Wafer-Scale Engine (WSE), a giant chip measuring 46,225 square mm — about the size of a dinner plate — was born.

Cerebras' first chips weren’t just bigger; they were purpose-built for AI training and sported a novel compute engine designed to speed up the highly sparse matrix multiply-accumulate operations common in deep learning. This hardware sparsity took advantage of the fact that large portions of a neural network’s parameters ultimately end up being zeros, allowing Cerebras to boost the effective computational output of its first-gen WSE accelerators from 2.65 16-bit petaFLOPS to 26.5. Nvidia added support for sparsity in its Ampere generation a year later, but it only worked for a specific ratio (2:4), limiting its effectiveness to select use cases.

To train a model, up to 16 of these chips could be ganged together over a high-speed interconnect.
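For illustration, here is a minimal sketch of the 2:4 structured-sparsity pattern described above, in plain Python. The real thing happens in hardware on packed tensors; the function name and layout here are our own:

```python
def prune_2_4(weights):
    """Apply 2:4 structured sparsity: in every group of four weights,
    keep the two with the largest magnitude and zero the rest. This is
    the fixed pattern Nvidia's Ampere-era sparsity support accelerates."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Rank the group by magnitude and keep the top two entries.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return pruned

w = [0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.8, 0.01]
sparse = prune_2_4(w)
print(sparse)  # [0.9, 0.0, 0.0, -1.2, 0.3, 0.0, -0.8, 0.0]
```

Because exactly half the weights survive, a multiply-accumulate unit that skips zeros does half the work, doubling effective FLOPS. Cerebras' scheme, by contrast, exploited unstructured sparsity at whatever density the network actually had.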
This was kind of important too, because unlike GPUs, which stored model weights in HBM or GDDR memory, Cerebras' chips were almost entirely reliant on on-chip SRAM. Although SRAM is insanely fast, which is why it’s used for caches in basically every modern processor, it’s not particularly space efficient. While Cerebras' first wafer-scale accelerator could theoretically reach 9 petabytes per second of memory bandwidth, it was limited to just 18 GB of capacity at a time when Nvidia was already at 32 GB per GPU and about to make the leap to 40 GB or even 80 GB per chip.

Still, the approach was performant enough that for its second-generation wafer-scale accelerator, launched in 2021, Cerebras doubled down on the architecture. While the WSE-2 wasn’t physically larger, the move to TSMC’s 7nm process tech allowed the company to more than double the transistor count, compute density, SRAM capacity, and bandwidth. The chips also supported larger clusters, scaling up to 192 systems, though in practice these clusters were usually smaller, at between 16 and 32 systems per site.

It was also around this time that Cerebras caught the attention of United Arab Emirates-based cloud provider G42, which quickly became its largest financier. By mid-2023, the chip startup had secured orders worth $900 million for nine supercomputing sites with 36 exaFLOPS of super-sparse AI compute between them.

A year later, Cerebras made the jump to TSMC’s 5nm process with the WSE-3, and while memory and bandwidth only saw modest gains, compute once again doubled, now topping 125 petaFLOPS of sparse (12.5 petaFLOPS dense) compute at 16-bit precision. Cerebras’ CS-3 systems have seen the widest deployment of the three generations, and now power the majority of the Condor Galaxy cluster it built for G42, as well as several new sites across North America and Europe.
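To put that bandwidth-to-capacity ratio in perspective, here's a back-of-the-envelope calculation using the first-gen figures above, plus an assumed roughly 1 TB/s for a contemporary 32 GB HBM2 GPU (our estimate, not a quoted spec):

```python
def full_sweep_time_us(capacity_bytes, bandwidth_bytes_per_s):
    """Time to stream the entire memory contents past the compute
    units once, in microseconds."""
    return capacity_bytes / bandwidth_bytes_per_s * 1e6

# WSE-1: 18 GB of SRAM at a theoretical 9 PB/s.
print(full_sweep_time_us(18e9, 9e15))  # 2.0
# Contemporary 32 GB GPU at an assumed ~1 TB/s of HBM2 bandwidth.
print(full_sweep_time_us(32e9, 1e12))  # 32000.0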
Cerebras' inference inflection

Up to mid-2024, Cerebras' primary focus had been on training, but then the company announced a boutique inference-as-a-service offering to rival those from competing chip startups like Groq and SambaNova. It turns out, the massive SRAM capacity of Cerebras’ latest AI accelerators not only made them potent training chips but also left them particularly well suited to high-speed LLM inference.

In its third iteration, Cerebras' wafer-scale accelerators boasted more memory bandwidth than they could realistically use. At 21 PB/s, the chip’s memory is nearly 1,000x faster than Nvidia’s new Rubin GPUs. This, along with a dash of speculative decoding, allowed Cerebras to generate tokens far faster than any GPU-based system of the time. Even today, Cerebras routinely ranks among the fastest inference providers in the world. According to Artificial Analysis, Cerebras' kit can churn out more than 2,200 tokens a second when running GPT-OSS 120B High, 2.8x faster than the next-closest GPU cloud, Fireworks.

Cerebras didn’t know it at the time, but its inference platform would become a much bigger business than anyone had expected, and in September 2024, the company submitted its S-1 filing to the SEC to take the company public. Almost exactly a year later, it quietly pulled that S-1, delaying the IPO. Feldman's reasons? The company’s initial filing was rather concerning, as it showed G42 was responsible for 87 percent of its revenues. But in the year since launching its inference platform, Cerebras had racked up several high-profile customer wins from big names like Alphasense, AWS, Cognition, Meta, Mistral AI, Notion, and Perplexity. Feldman explained that the initial S-1 didn’t yet show the financial results of this growth. The company believed it would have a better story to tell investors later down the road.

Cerebras' inference platform has only grown since then.
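The speculative decoding trick mentioned above is easy to sketch. Here's a toy greedy version; the token "models" are stand-in lambdas, not real LLMs, and the function is our own illustration rather than Cerebras' implementation:

```python
def speculative_decode(target, draft, prompt, n_new, k=4):
    """Greedy speculative decoding sketch. `target` and `draft` each map
    a token sequence to the next token. The cheap draft proposes k tokens;
    the expensive target verifies them in one batched pass, keeping the
    longest agreeing prefix plus its own correction at the first mismatch."""
    out = list(prompt)
    target_passes = 0
    while len(out) - len(prompt) < n_new:
        # Draft k tokens autoregressively (cheap, fast model).
        ctx = out[:]
        proposed = []
        for _ in range(k):
            t = draft(ctx)
            proposed.append(t)
            ctx.append(t)
        # The target scores all k positions in a single batched pass:
        # one serial step on the big model instead of k. (Simulated
        # here with per-position calls, counted as one pass.)
        target_passes += 1
        for tok in proposed:
            expected = target(out)
            if tok == expected:
                out.append(tok)        # accepted for free
            else:
                out.append(expected)   # target's correction; stop here
                break
            if len(out) - len(prompt) >= n_new:
                break
    return out[len(prompt):], target_passes

# Hypothetical models: the target counts upward; the draft agrees except
# every fourth step, where it guesses wrong.
target = lambda seq: seq[-1] + 1
draft = lambda seq: seq[-1] + (2 if len(seq) % 4 == 0 else 1)
tokens, passes = speculative_decode(target, draft, [0], 8)
print(tokens, passes)  # [1, 2, 3, 4, 5, 6, 7, 8] 2
```

Eight tokens for the price of two serial passes on the big model. When the draft's acceptance rate is high, that multiplier compounds with raw memory bandwidth.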
The company has steadily expanded its footprint while announcing deeper relationships with AWS and adding OpenAI as a customer. On Thursday, the startup officially joined the NASDAQ under the ticker CBRS, having raised $5.5 billion in the process. Shares skyrocketed nearly 70 percent on the first day of trading, as investors poured their money into a new way to play the AI boom. An IPO is something many startups aspire to but few, especially in the cutthroat world of semiconductors, ever accomplish.

What happens now

From a technical perspective, Cerebras is overdue for a refresh. The WSE-3 accelerators that pushed it over the IPO finish line are getting rather long in the tooth, and the architecture lead afforded by its SRAM-heavy design is shrinking. Nvidia’s acquihire of Groq gave Feldman’s long-time rival an SRAM-packed inference platform of its own, while others are racing to catch up.

From here, we can only speculate, but we’ll hazard a guess that Cerebras' new shareholders are going to want to see new silicon sooner rather than later. Based on its existing roadmap, we expect WSE-4 will offer a sizable leap in floating-point performance, though not necessarily at 16-bit precision. Much of the industry has aligned around lower-precision data types like FP8 and FP4. An exaFLOP of ultra-sparse FP4 compute wouldn’t shock us in the least. How useful sparsity would actually be for LLM inference is another matter. LLM inference hasn’t historically benefited much from sparsity, but that’s never stopped chipmakers from advertising sparse FLOPS anyway.

We also expect to see Cerebras pack more SRAM into its next wafer-scale compute platform, possibly using TSMC’s 3D chip-stacking tech to do it. The WSE-3’s 44 GB of SRAM capacity remains a limiting factor for what models it can and can’t serve efficiently.
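The capacity math behind that limit is simple division. A rough sketch, assuming 44 GB of usable SRAM per chip and ignoring activations, KV cache, and any replication:

```python
import math

def wafers_needed(n_params, bytes_per_param, sram_per_chip=44e9):
    """Chips required just to hold a model's weights in on-chip SRAM."""
    return math.ceil(n_params * bytes_per_param / sram_per_chip)

# A trillion-parameter model at different storage precisions:
print(wafers_needed(1e12, 2.0))  # FP16: 46
print(wafers_needed(1e12, 1.0))  # FP8:  23
print(wafers_needed(1e12, 0.5))  # FP4:  12
```

That spread lines up with the 12-to-48 range cited for trillion-parameter models, and it shows why every extra gigabyte of SRAM matters.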
A trillion-parameter model like Kimi K2 would require somewhere between 12 and 48 of Cerebras' WSE-3 accelerators, depending on how the model weights are stored and how many parameters have been pruned, so any increase in SRAM capacity would go a long way toward improving the efficiency of its accelerators.

More collaborations

Alongside new silicon, we can also expect to see more collaborations akin to Cerebras' tie-up with AWS. Earlier this year, AWS announced it would combine its Trainium3 AI accelerators with Cerebras' WSE-3-based systems to speed up its inference platform, in much the same way Nvidia is doing with Groq’s accelerators. Cerebras could certainly do something similar with AMD or any other chipmaker. In this sense, Cerebras is in a position to offer its chips as a decode accelerator, which offloads the bandwidth-intensive parts of the inference pipeline onto its chips, while other accelerators handle the compute-heavy prompt-processing side of the equation.

However Cerebras frames its next collab, its shareholders are going to expect growth. And as the saying goes, the enemy of my enemy is my friend. ®
Categories: Linux fréttir

Nobody believes the 'criminals and scumbags' who hacked Canvas really deleted stolen student data

Thu, 2026-05-14 22:42
FEATURE When Instructure “reached an agreement” with data theft and extortion crew ShinyHunters this week, after the attackers claimed to have stolen data tied to 275 million students, teachers, and staff, the education tech giant assured Canvas users that their private chats and email addresses would not turn up on a dark-web marketplace, and that they would not be extorted over the incident.

“We received digital confirmation of data destruction (shred logs),” Instructure assured the nearly 9,000 affected universities and K-12 schools. “We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.”

Not a single responder that The Register spoke with believes this is true.

“Do I believe they deleted the data? No. They're criminals and scumbags,” Recorded Future threat intelligence analyst Allan Liska, aka the Ransomware Sommelier, told us.

“But, this is part of what Max Smeets calls ‘The Ransomware Trust Paradox,’” he added. “Ransomware groups have to, minimally, not post data they claimed to have deleted or no one will pay them in the future, but this is done knowing that the data is likely not deleted.”

Halcyon Ransomware Research Center SVP Cynthia Kaiser, who previously spent two decades at the FBI, said she doesn’t think that anyone who studies ransomware groups’ operations believes the gang actually destroyed the stolen files.

“‘We destroyed the data’ is a standard line from extortion groups once a payment is made or negotiations conclude, but time after time it has proven untrue,” Kaiser told The Register. “ShinyHunters in particular has a documented history of recycling, reselling, and re-leveraging stolen data across campaigns – data they claimed was contained from earlier intrusions has resurfaced on criminal forums months and years later.”

Kaiser also doesn’t think this is the last threat that the schools will face from the Canvas breach.
“Halcyon expects targeted phishing waves against staff, students, and parents over the next six to 12 months using leaked names, email addresses, and Canvas chat context to make the lures convincing,” she said.

To be clear: Instructure execs never directly said the company paid the ransom, and we don’t know the exact amount of money the criminals demanded from the digital learning biz. We do know, however, that “reached an agreement” is corporate-speak for the victim paid up. Doug Thompson, chief education architect at cybersecurity firm Tanium, estimates the figure sits somewhere between $5 million and $30 million.

Meanwhile, this latest extortion attack illustrates the impossible choice facing organizations entrusted with protecting people’s data when digital thieves breach their networks and steal sensitive information.

“The FBI says don’t pay,” Thompson told The Register. “But the operational reality at 3 a.m. during finals week or enrollment season can push institutions toward a very different calculation. Until that incentive structure changes, education is likely to remain unusually vulnerable to extortion pressure.”

To pay, or not to pay?

The US federal government, law enforcement agencies, and private-sector threat intelligence analysts all advise victims not to pay a ransom.

“Paying ransoms rewards and incentivizes the criminals, funding their search for new victims, and I’ve long advocated for a ban on ransomware payments,” Emsisoft threat analyst Luke Connolly told us. “But in the absence of regulation applying to all organizations, the stark reality is that Instructure faced a crisis, and they negotiated to try to minimize risk and harm.”

No company wants to pay a ransom to its attackers, and most say they won’t – at least in principle – because they don’t want to fund criminal operations and incentivize the crooks. There’s also no guarantee that paying will secure the return of their data or prevent additional extortion attempts.
CrowdStrike surveyed 1,100 global security leaders last summer, and of the 78 percent who said they experienced a ransomware attack in the past year, 83 percent of those that paid ransoms were attacked again. Plus, 93 percent lost data regardless of payment.

While data suggests that fewer organizations are paying criminals’ ransom demands - Chainalysis found the percentage of paying victims in 2025 dropped to an all-time low of 28 percent, despite attacks hitting record highs - when faced with extortion or a ransomware infection, the "to pay or not to pay" debate becomes much more complicated.

“Most organizations still say publicly that they won't pay, and many genuinely don't, but when the alternative is mass downstream harm to students, parents, and thousands of customer institutions, the calculus shifts,” Kaiser said. “Pay-or-leak groups like ShinyHunters specifically engineer that calculus by creating intense financial and reputational pressure, and when demands go unmet, they escalate to direct harassment of victim companies, employees, and clients.”

ShinyHunters did just that. The crew initially compromised Instructure in late April, and after the initial pay-or-leak deadline passed on May 6, ShinyHunters switched tactics to school-by-school extortion. They injected a ransom message into about 330 Canvas school login portals, causing Instructure to take the platform offline for a day - during final exams and Advanced Placement testing for many.

Other ransomware scum have gone to horrifying extremes, posting pictures and addresses of preschool children in an effort to get a payday, leaking cancer patients’ nude photos, and threatening them with swatting attacks. Mandiant Consulting CTO Charles Carmakal previously told The Register that ransomware infections have morphed into "psychological attacks” with crooks SIM swapping executives’ kids to pressure their parents into paying.
Calculating risk

In addition to responding to criminals directly harassing their students, patients, customers, and employees, victim organizations also have to take into account potential lawsuits if the crooks dump individuals’ personal or health data, and the reputational hit from seeing all of this protected information published online.

The decision about what to do in a ransomware attack revolves around risk reduction, Liska said. “Not paying a ransom means an increased risk of data exposure, which in this case could cause serious harm,” he told us. “While there is no good decision in most ransomware negotiations, the idea is to protect as many people as possible and that may mean that paying is the least bad option.”

While he didn’t respond to or investigate the Instructure case, “protecting children's data is absolutely a critical factor in these types of decisions, especially when the attacks originate from one of the groups associated with The Com,” Liska added.

The Com, a loosely knit group of primarily English speakers who are also involved in several interconnected networks of hackers, SIM swappers, and extortionists such as ShinyHunters and Scattered Lapsus$ Hunters, has been known to blackmail kids and teens into carrying out shootings, stabbings, and other real-life criminal acts.

“These groups are known to coerce victims using threats of physical harm, including bricking and swatting," he said. "Not paying may have increased the risk of serious harm to the children whose data was exposed.”

Ed sector 'more likely to pay'

Instructure’s intrusion follows several other high-profile attacks against education-sector software providers. In December 2024, PowerSchool suffered a breach, affecting tens of millions of students. The company reportedly paid about $2.85 million in bitcoin in exchange for a video supposedly showing the attackers destroying the data.
But about five months later, in May 2025, the ed-tech provider’s school district customers received individual extortion threats from either the same ransomware crew that hit PowerSchool or someone connected to the crooks. Earlier this year, ShinyHunters claimed it stole data from K-12 software provider Infinite Campus as part of a broader wave of Salesforce-related intrusions.

“Education keeps emerging as one of the sectors where organizations are still more likely to pay under pressure,” Thompson said.

In addition to students’ – especially minors’ – data containing highly sensitive personal details, and therefore presenting an attractive target for attackers, this is also driven in part by market pressure and economics. It’s costly and inconvenient for schools to switch learning management systems, and they are typically locked into multi-year contracts with these software vendors, according to Thompson.

“The other issue is concentration,” he said. “A relatively small number of vendors hold data for enormous portions of the education system. PowerSchool, Infinite Campus, Canvas, Blackboard; those four hold records on something close to every American student, and hackers know it. Three of the four have been breached at a multi-million-record scale in the last 18 months.”

Thompson said he expects additional attacks against major education platforms to follow. “The economics are good. Instructure paid. PowerSchool paid last year. Every other ed-tech vendor's board just had a conversation about what their number would be,” he told us. “The pattern is established.”

According to Connolly, the universities and K-12 schools affected by the Canvas hack shouldn’t consider their data safe, regardless of Instructure’s assurances or the crooks' promises to delete it. “There will be future attacks, without a doubt.” ®

Sick and wrong: Ontario auditors find doctors' AI note takers routinely blow basic facts

Thu, 2026-05-14 20:50
The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems.

The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector.

As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy. What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations.

Nine out of 20 AI systems reportedly “fabricated information and made suggestions to patients' treatment plans” that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating incorrect information in the sample reports, such as no masses being found, or patients being anxious, even though these things were never discussed in the recordings.

Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report.
OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems.

Bad evaluations don’t help, either

AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation.

A good portion of the report blames how the systems were evaluated. According to the report, the weight given to various categories of AI Scribe performance was wonky. While 30 percent of a platform’s evaluation score depended solely on whether it had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score. Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed an additional 4 percentage points.

In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems. “Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime.

The Register reached out to the Ontario Health Ministry for its take on the report, and whether it was going to conform to its recommendations for the AI Scribe program, but we didn’t immediately hear back.
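To see how those weightings play out, here is a toy score using only the criteria the report itemizes. The remaining roughly 58 percent of the rubric isn't broken out in the report, and the vendor ratings below are invented for illustration:

```python
# Weights named in the auditor's report; the rest of the rubric
# (which makes up the remaining share of 100 percent) isn't itemized.
WEIGHTS = {
    "ontario_presence": 0.30,
    "note_accuracy": 0.04,
    "bias_controls": 0.02,
    "threat_risk_privacy": 0.02,
    "soc2_type2": 0.04,
}

def partial_score(ratings):
    """Weighted sum over the named criteria, each rated 0.0 to 1.0."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Two hypothetical vendors: one local but sloppy, one accurate but remote.
local_sloppy = partial_score({
    "ontario_presence": 1.0, "note_accuracy": 0.2,
    "bias_controls": 0.5, "threat_risk_privacy": 0.5, "soc2_type2": 1.0,
})
accurate_remote = partial_score({
    "ontario_presence": 0.0, "note_accuracy": 1.0,
    "bias_controls": 1.0, "threat_risk_privacy": 1.0, "soc2_type2": 1.0,
})
print(local_sloppy, accurate_remote)
```

On these criteria alone, the sloppy local vendor scores roughly three times higher than the accurate remote one, which is exactly the distortion the auditors flagged.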
A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and there have been no known reports of patient harms associated with the technology. ®

Anthropic tosses agents into the API billing pool

Thu, 2026-05-14 20:03
Anthropic has further restricted access to its Claude model family while framing the limitation as responsive customer service.

"We've heard your questions about SDK and claude -p usage sharing your subscription rate limits with Claude Code and chat," the company said in a social media post. "Starting June 15, programmatic usage gets its own dedicated budget instead. Your subscription limits don't change, they're now reserved for interactive use."

Subscription usage only applies to interactive use of Claude Code, Claude Cowork, and Claude.ai. Interactive mode involves a user typing a prompt and receiving a response. There's a human in the loop. Programmatic interaction, whether via Anthropic's own Agent SDK, headless mode, or a third-party tool, will be counted against a separate usage pool funded by a credit equal to the customer's subscription fee.

So a Pro subscriber paying $20 per month will have two token supply chains – one for interactive usage and one for programmatic usage, which the subscriber must claim to obtain. But programmatic usage is metered at costlier API rates, so the credit doesn't stretch as far as the subscription allotment. And if the credit is exhausted, spillover programmatic tokens get billed at (occasionally discounted) API rates through "extra usage," a separate token allotment that, if enabled, exists mainly as a way to avoid a sudden service cutoff and to set a limit on spending.

The questions from users arose because Anthropic's prior efforts to prevent customers from gorging on tokens at the all-you-can-eat subscription trough haven't been comprehensive. The AI biz, mindful that it will need to show a profit eventually, has been trying to push customers toward its metered API and to constrain consumption of flat-rate subscription tokens. Microsoft's GitHub Copilot has embarked on a similar transition.

Anthropic initially did so by disallowing the use of Claude subscriptions with third-party harnesses – applications like OpenCode that coordinate communication with the backend model.
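Mechanically, the two-pool arrangement described above looks something like the following sketch. The class, rates, and numbers are hypothetical; Anthropic hasn't published a formula:

```python
class ProgrammaticBudget:
    """Toy model of the split described above: a monthly credit equal to
    the subscription fee is drawn down at API rates, and overflow spills
    into an optional, capped "extra usage" pool. Rates are invented."""

    def __init__(self, monthly_fee, api_rate_per_mtok, extra_cap=0.0):
        self.credit = monthly_fee          # dollars; resets monthly, no rollover
        self.api_rate = api_rate_per_mtok  # dollars per million tokens
        self.extra_cap = extra_cap         # spend ceiling for overflow billing
        self.extra_spent = 0.0

    def charge(self, tokens):
        """Returns (accepted, overflow_dollars_billed)."""
        cost = tokens / 1e6 * self.api_rate
        if cost <= self.credit:
            self.credit -= cost
            return True, 0.0
        # Credit exhausted: bill the remainder as extra usage, if it fits.
        overflow = cost - self.credit
        if self.extra_spent + overflow > self.extra_cap:
            return False, 0.0              # hard cutoff instead of surprise bills
        self.credit = 0.0
        self.extra_spent += overflow
        return True, overflow

budget = ProgrammaticBudget(monthly_fee=20.0, api_rate_per_mtok=10.0, extra_cap=5.0)
print(budget.charge(1_500_000))  # (True, 0.0)  - covered by the $20 credit
print(budget.charge(1_000_000))  # (True, 5.0)  - $5 spills into extra usage
print(budget.charge(1_000_000))  # (False, 0.0) - extra-usage cap reached
```

The sketch only mirrors the mechanics as described; Anthropic's actual metering, discounts, and rate card are more involved.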
That policy dates back to February 2024, but Anthropic seldom enforced it until earlier this year, when demand for AI inference began to outpace the company's Claude supply. In February this year, growing interest in OpenClaw, an open source agent platform that encourages long-running, token-burning tasks, prompted Anthropic to get serious about its ban on using third-party harnesses with Claude subscriptions.

But customers wondered about third-party applications built with Anthropic’s own Agent SDK, which hadn't been explicitly disallowed, and about the use of headless mode (claude -p), a way to have Claude work on a task without interaction. They now have their answer.

It's worth noting that, if the programmatic credit is not exhausted, it doesn't roll over. It gets lost, or you might say, Anthropic reclaims it. The company refers to the credit using a dollar sign, but it's not redeemable currency. It has already been spent. So customers seeking to get the full value from the new arrangement need to calibrate their programmatic usage to consume the full credit every month, no more and no less.

Anthropic's recently announced deal with SpaceX to obtain the compute capacity of its Colossus 1 datacenter, along with its removal of peak-hours usage restrictions, raised hopes among developers that more tolerant usage policies might return. This latest subscription limitation shows that's not happening. ®

Grad-to-be turns graduation cap into Rust-powered light show

Thu, 2026-05-14 17:30
College graduation season has begun in the United States, and one soon-to-graduate computer science student has decided to decorate his graduation cap the way any good maker would: by writing some Rust code and wiring it up with LEDs that light up when the tassel moves from right to left.

Eric Park, due to walk in his commencement ceremony on Friday at Purdue University, published a blog post this week explaining the project, which he said he undertook as an alternative to building a contraption that would set his mortarboard aflame when the tassel was moved.

Unfortunately for Park, many American universities (and some in other countries, like the UK) require college students who want to walk in commencement ceremonies to rent their gowns and mortarboards. It’s not uncommon for students to be charged a ludicrous amount to rent the set, and in many cases, rental companies require students to return their mortarboards and gowns alike, as is the case for Park.

“The rental agreement's clause 98.c.2 probably forbids [burning a rented mortarboard], and I don’t think Purdue would like it very much if I set the stage on fire,” Park said in the post.

An easier-to-remove version consisting of LED strips, a reed switch, and a magnet, controlled by a super-tiny Digispark ATtiny85, presented itself as the alternative. The result, as demonstrated in a YouTube video, is a mortarboard that is all aglow, and flameless, as soon as the reed switch is activated by the magnet placed on the left-hand side of the hat.

“The entire thing was stuck on with double-sided tape and Kapton tape, and I tried a small patch just to make sure it wouldn't rip up the fabric,” Park told The Register in an email.

The lightweight and easy-to-remove design also necessitated a compact power source. Unfortunately, Park had to settle for an external battery pack carried in the pocket to power the unit.
“It was going to be all self-contained with a 21700 cell, but I didn't have a boost converter on hand so I decided to make do with the power bank solution,” the soon-to-be graduate told us.

According to Park, the build was relatively quick: Hardware took a bit more than three hours, and that was largely because he no longer had access to a full lab and was stuck working with his home toolset. Writing the code took a couple of hours, which Park attributed to his insistence on using Rust.

“It probably would’ve been easier if I didn’t use Rust and just used the Arduino libraries, or if I used a different board,” Park explained in his blog post. “But I was really married to this blog post title … and I was pretty sure an ESP32 board would’ve been overkill and wouldn’t have stayed on the cap properly.”

For those who haven’t clicked through to read his blog post, its headline is simply “my graduation cap runs Rust.” That’s a pretty solid title - at the very least, it’s going to get people to read it, and read they have.

“I've read through the comments on Hacker News and I'm happy and thankful about all of the positive comments,” Park told us. “It's great to see a silly but fun project like this reach a wide audience.”

“I particularly liked the guy that was reminded why he got into this field through my project,” Park added.

So, will Purdue students graduating alongside Park get treated to a surprise light show? Sadly, no - he said in the blog post, and reiterated to us, that he’s probably not going to wear it during the ceremony.

“I thought about it but decided it looks pretty tacky,” Park wrote in his blog post. “It looks like what kids would think of as a gaming PC and what boomers would think of as a seizure.” He might toss it on for photo ops after the ceremony, but that’s about it, Park told us.
That said, Park did publish the code on GitHub, so if some other all-but-commenced college student were to take it upon themselves to build their own copy and wear it during their ceremony, that's on them. If I were graduating, I'd consider adding some speakers to the setup and piping in some music, too. Don't come running to El Reg if such a move gets you in trouble, though: We claim no responsibility for commencement shenanigans. ®

KDE bags €1.3M as Europe realizes it might need an OS of its own

Thu, 2026-05-14 15:38
The KDE project turns 30 in five months, but it already got an early birthday present: €1,285,200 from Germany's Sovereign Tech Fund. That's £1.1 million, or $1.5 million in US bucks. The KDE team already has some ideas about how it will spend the money, and the project's thank-you note mentions a few.

This is not the first time we have mentioned the Sovereign Tech Fund's largesse. In 2023, it gave €1 million to GNOME, and then in 2024 it funded both FreeBSD and Samba. Since then, Donald Trump began his second US presidency, and the push for European digital sovereignty has gained considerably more urgency – as we reported from this year's Open Source Policy Summit in Brussels.

KDE Linux is the desktop project's technologically radical in-house distro, which is still in development. We have mentioned this a couple of times: when it was announced in 2024 as "Project Banana," and again in 2025, when it reached alpha.

KDE Linux borrows some of its design from Valve's SteamOS 3. Both are immutable distros, based on Arch Linux, with dual Btrfs-formatted root partitions: updates are written to one while the other remains a known-good fallback, similarly to ChromeOS (and both obviously use KDE Plasma as their desktop). This has required development work - for instance, before SteamOS, Btrfs required unique partition IDs - and for that, Valve partnered with Spanish workers' cooperative Igalia, which is also working on the Rust-based Servo web rendering engine. For that effort, Igalia also received STF funding last year.

SteamOS has millions of users, and ChromeOS hundreds of millions - even if its future replacement is coming into view. The resilience of these OSes in frequent, maintenance-free use is about as well established as end-user-facing Linux gets. One could interpret the STF money as some level of endorsement of the ideas behind KDE Linux. Perhaps it will soon join this short list of European alternatives to Microsoft Windows.
Interest in moving European organizations away from American cloud services is growing rapidly, of course. On the small end of the scale, digital artist Wimer Hazenberg recently described How I Moved My Digital Stack to Europe. Taking a broader view, earlier this week, the Financial Times reported on Life without US Tech. It describes how International Criminal Court judge Nicolas Guillou was the target of US sanctions, and found himself locked out of everything that relied on American companies. In October last year, The Register mentioned similar issues faced by ICC prosecutor Karim Khan, when reporting allegations that the ICC was kicking MS Office to the curb. (A few months ago, Microsoft conceded some "inaccuracy" from its spokesperson in that case.) It seems he was not alone. The ICC is moving to OpenDesk from German organization ZenDIS, both of which we mentioned in our report from FOSDEM on messaging systems. These are apps and suites, rather than OSes – they leave the question of the host OS open. That means organizations with large existing investment in Windows (and institutional knowledge of supporting Windows) can keep it for now, while moving to new tools. That's not quick enough for those who want to banish American OSes sooner. Last month, The Reg mentioned France's Directorate for Digital Affairs, DINUM, which is planning to adopt Linux. Some more information is emerging about how it may do it. Rather than building a whole new distro of its own – such as KDE Linux, or the Fedora-based EU OS proposal we looked at last year – DINUM is building a Nix configuration, which it can simply apply to generate a complete bespoke immutable OS image. The base image is called Sécurix. The project page describes it as an OS base for secure workstations, designed according to the ANSSI recommendations for the secure administration of information systems. As an example of how to use it, there's Bureautix. 
Rather than authenticating against complicated network directories such as LDAP or the Red Hat-backed FreeIPA, Bureautix keeps it local: user configuration is synced from servers to client machines along with the software configuration, and users sign in with a YubiKey. The names Sécurix and Bureautix are nods to the famous indomitable Gauls Astérix and Obélix, created by writer René Goscinny, who died in 1977 aged 51, and artist Albert Uderzo, who died in 2020 at 92. These ancient Gauls have outlived their creators: the latest album, Astérix in Lusitania came out in October 2025, and this vulture recommends it. ®
Categories: Linux fréttir

Waymo recalls 3,800 robotaxis after one drove itself into a flood

Thu, 2026-05-14 15:08
Waymo is recalling almost 3,800 robotaxis amid fears they may go off-script and drive into floods on high-speed roads. All 3,791 cars running Waymo’s fifth and sixth-generation Automated Driving Systems (ADS) are being taken off the road before they potentially injure passengers. "The software may allow the vehicle to slow and then drive into standing water on higher speed roadways," Waymo said in a letter [PDF] to the National Highway Traffic Safety Administration (NHTSA) this week. "Entering a flooded roadway can cause a loss of vehicle control, increasing the risk of a crash or injury." The Alphabet-owned robotaxi biz said all affected cars received an update on April 20, which increased "weather-related constraints and updated the vehicle maps," which served as an "interim remedy" while it works on a more permanent solution. This coincided with a case in San Antonio, Texas, on April 20, in which a car was caught on video - shared with broadcaster KSAT 12 - driving into floodwater and becoming stuck. “On 4/20/2026, an unoccupied Waymo AV encountered an untraversable flooded section of a roadway that has a 40 mph speed limit,” the company wrote in one document [PDF] supporting the recall notice. “The Waymo AV detected potentially untraversable flood water and proceeded at reduced speed.” Waymo temporarily suspended its services in San Antonio as a result and started pulling cars from the city’s fleet days after. The suspension remains in place today. The Register asked Waymo for more information. The company currently operates 24/7 driverless robotaxi services in Dallas, Houston, Los Angeles, Miami, Nashville, Orlando, Phoenix, and the San Francisco Bay Area. Waymo has also set its sights on launching in London in September, its first foray outside the US, pending necessary regulatory changes that would allow driverless cars to operate in the city. 
Test cars have already been spotted on the capital’s streets with trained experts behind the wheel should any of the cars encounter issues, much like the deal Waymo agreed to in New York when the state handed its testing license back. As The Register previously reported, given the differences in roads and other motoring infrastructure between the US and UK, Waymo will have to overcome unique challenges before opening its car doors to the public. In testing these vehicles now, Waymo is building a base of evidence to support its bid to operate in the UK. In recent years, however, the company has had to tackle some tricky PR hiccups, mainly related to safety – an issue that autonomous car companies often claim their tech will help improve, not hinder. Reports of serious issues, including cars ignoring red lights, veering into moving traffic, and killing dogs, sit alongside evidence of the technology helping to avoid potential freeway pile-ups, as a recent Waymo case study in LA shows. Serious issues continue to plague the cars, and while they attract more media scrutiny than equivalent human-driver mishaps, public trust will remain strained until such cases become far rarer. ®

UK begins antitrust inquiry into Microsoft's business software ecosystem

Thu, 2026-05-14 14:15
The UK’s Competition and Markets Authority (CMA) is taking a closer look at Microsoft’s business software empire, launching a strategic market status investigation into the company’s ecosystem. The probe, the fourth since the UK's digital markets competition regime came into force last year, will determine whether Microsoft should be designated as having strategic market status, which would allow the CMA to implement interventions to support competition. In March, the CMA announced that the investigation was coming. The regulator was concerned that Microsoft's software licensing practices were reducing competition in the cloud. In today's announcement, the CMA said it had "heard that UK customers may not always be able to effectively combine software from Microsoft with that of other providers, limiting their ability to get access to the best products at the most competitive prices." Microsoft is no stranger to regulatory friction. In 2025, it described calls from AWS and Google for the UK competition regulator to "intervene and constrain the price" it charges customers to run its wares on those rivals' cloud platforms as "extraordinary and unprecedented." Two years prior, Google branded Microsoft's cloud software licensing a "tax" paid by customers as a penalty for not running Microsoft software on Azure infrastructure, claiming that Microsoft charges up to four times more, for example, to run Windows Server on GCP. AWS has previously moaned about this too. As well as assessing whether Microsoft is using its position to limit customer choice, the CMA investigation "includes looking at how AI competitors are able to integrate with Microsoft's business software, giving customers access to AI software across suppliers to best suit their needs." Microsoft is pushing Copilot AI into as many Microsoft 365 subscriptions as it can, even creating a new tier, E7, aimed specifically at AI services. 
In a statement, Nicky Stewart, senior advisor to the Open Cloud Coalition - a trade association Microsoft previously dismissed as a Google lobby group - said: "This investigation needs to be both rapid and conclusive. It must address Microsoft's unfair licensing practices once and for all, giving the UK cloud market a level playing field and the confidence to innovate and invest for the long term." Reg readers should not expect results anytime soon. It took 21 months for the CMA to publish the results of an investigation into the UK cloud services market, in which it said Microsoft and AWS were using their dominance to harm UK cloud customers. It claimed Microsoft, for example, could have charged UK enterprise customers £500 million more annually to run its wares in AWS and Google clouds than they'd have paid to run them in Azure. A key concern from that investigation - whether Microsoft's software licensing practices were reducing competition in cloud services - has informed this one. This latest inquiry must be completed within nine months, and a decision on designating Microsoft with SMS is scheduled to be reached by February 2027. For its part, a Microsoft spokesperson told The Register, "We are committed to working quickly and constructively with the CMA to facilitate its review of the business software market." The investigation will be wide-ranging, encompassing productivity applications, operating systems, databases, and security software. Sarah Cardell, Chief Executive of the CMA, said, "Our aim is to understand how these markets are developing, Microsoft's position within them and to consider what, if any, targeted action may be needed to ensure UK organizations can benefit from choice, innovation and competitive prices." Authorities in the US, Europe, Brazil, South Africa and Japan are also closely monitoring Microsoft's licensing policies. ®

AI to infest eight in ten premium phones within two years

Thu, 2026-05-14 14:02
AI will be in the majority of premium smartphones and wearables within a few years - bad news for anyone who doesn't like or trust the overhyped pixie dust. Counterpoint Research forecasts that more than 80 percent of premium smartphones will have agentic AI capabilities by 2027, while a similar proportion of so-called wearable devices are on track to be AI-enabled by 2032. To some degree, this appears to be a push from the vendors, who see AI as a "premium" feature to justify the inflating price tag attached to devices. Counterpoint says that MediaTek became the first chipset maker to commercialize agentic AI capabilities via its Dimensity 9400 series, followed by Qualcomm with the Snapdragon 8 Elite Gen 5 and Snapdragon 8 Gen 5 platforms. This marked the start of a new smartphone technology cycle in which devices increasingly shifted from sporting AI assistants to boasting "autonomous, context-aware AI experiences," Counterpoint claims. It defines an agentic AI smartphone as one capable of running software agents that can understand context, plan actions, make decisions, and execute multi-step tasks on behalf of the user. This places more emphasis on memory bandwidth and sustained AI throughput rather than just having a neural processing unit (NPU) to boost processing, hence the appearance of newer silicon designed with agentic AI in mind. With the memory shortage pushing up the price of phones, the device makers also need something to convince buyers to part with more of their hard-earned cash. "We expect one in three smartphones sold in 2027 to have agentic AI capability, driven by both premium (>$600) and mid-high ($250-$600) price tier smartphones," says Counterpoint research vice president Peter Richardson. However, for premium devices, the figure is 80 percent or higher, and the bigger opportunity will open up when these features start reaching mid-tier smartphones at scale, the firm forecasts. Not everyone welcomes AI in their personal gadgets. 
One UK used device biz reported a slump in demand for pre-owned Samsung Galaxy phones since the firm started adding AI capabilities. The figure of 80 percent crops up again in wearables, where the proportion of AI-capable devices is projected to rise from 30 percent in 2025 to nearly 80 percent by 2032. This represents a trillion-dollar revenue opportunity for the vendors, Counterpoint believes. Wearables - smartwatches, health monitors and the like - increasingly execute inference workloads locally, with models trained in the cloud then deployed onto the device. This shifts latency-sensitive functions, such as continuous health monitoring, gesture recognition, and contextual awareness, to the device itself while improving privacy by cutting back on the sensitive biometric information sent to the cloud, according to Counterpoint. Smartwatches and wireless earbuds are forecast to remain the largest categories by unit volume through 2032, with the latter gaining AI-driven features such as real-time language translation, speaker identification, and personalized hearing adaptation. Counterpoint expects smart rings (no giggling at the back there) to be the fastest-growing segment. This is because constantly worn items can continuously track health signals including heart rate variability, sleep stages, and stress. Revenue from AI-enabled wearables is forecast to grow at an average of 21 percent annually between now and 2032. ®

Dude… where’s my password? Claude reunites forgetful stoner with $400k Bitcoin stash

Thu, 2026-05-14 13:30
Eleven years ago, a stoner bought some Bitcoin, lit up, and set a password that he soon forgot. Now, after he searched for more than a decade, Claude AI has helped him recover the credentials needed to access a crypto wallet containing currency now worth a whopping $400,000. The man, who maintains an anonymous online profile and goes only by the alias “cprkrn,” vowed to name his progeny after Anthropic’s CEO Dario Amodei, all because the AI tool helped him regain access to an Obama-era wallet he thought was impenetrable. Armed only with an old mnemonic phrase, the man plugged it into Claude and told the AI to search his computer for ways to use it to figure out the password guarding the five Bitcoins he bought in 2015 at a Starbucks. He told web show MTSlive that he had two of the three passwords needed to open up the wallet, but couldn’t find the crucial third after changing it, and naturally later forgetting it, while he was high. He said he bought the tokens when the price of each was around $250. Altogether, his Bitcoin stash is now worth just shy of $400,000. After eight weeks of working to crack the password, and after the man gave it access to the old computer he used for college work, Claude found a wallet backup that the mnemonic phrase was able to decrypt. According to an overview of the mission, written by Claude, the wallet backup yielded the private keys required to access the Blockchain.com wallet. The wallet’s transaction history shows the funds lying dormant since April 2015, then being transferred out on Wednesday. Previous attempts to regain access to the wallet involved brute forcing password strings, 3.5 trillion of them by Claude’s reckoning, all to no avail. He even traveled back to his parents’ house to retrieve college notebooks, manually entering "anything that looked like password or a seed phrase" he thought might help the AI crack or find the third password. 
The man ran Claude for eight weeks to realise he changed the password 11 years ago, while stoned, to “lol420fuckthePOLICE!*:)”. This is a stellar case study to highlight the value of complex passwords, if there ever was one. ®

Anthropic’s Bun Rust rewrite merged at speed of AI

Thu, 2026-05-14 13:01
A pull request with a Rust version of Anthropic’s Bun, a JavaScript toolkit and runtime originally written in Zig, has been merged to the main Bun repository. This comes just days after its author, Jarred Sumner, said "there's a very high chance all this code gets thrown out." Sumner posted on X (formerly Twitter) five days ago that "99.8 percent of bun's pre-existing test suite passes on Linux x64 glibc in the rust rewrite," a clue that what was initially described as an experiment was likely to make it to production. Three days later, the Bun team released version 1.3.14, with Sumner stating that if the Rust rewrite was merged, "this would be the last version in Zig." Today that merge took place, adding more than one million lines of code. Sumner said it passes Bun's test suite on all platforms, fixes some memory leaks, and shrinks the binary size by between 3 and 8 MB. "Most importantly, we now have compiler-assisted tools for catching and preventing memory bugs, which have cost the team an enormous amount of development and debugging time over the years," he said in a comment. Performance is either neutral or faster, he said, though the codebase retains "the same architecture, the same data structures." No async Rust is used. Bun users have hit memory leak issues when deploying it as a production runtime. According to Sumner, "Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large percentage of that list is use-after-free, double-free, and forgot-to-free-on-error-path, and those become compile errors or automatic cleanup." A second pull request, removing upwards of 600,000 lines of Zig code, was automatically flagged by GitHub as "AI slop" and closed, but will presumably reappear in some form. The size of these commits makes them near-impossible for humans to review. "What a nice reviewable little commit. 
I'm sure it will not contain any bugs," said one comment on the Rust merge. Although the idea of the Rust port has been well received, the speed of the transition has taken the community by surprise. In normal circumstances, porting a major project so quickly would be risky, but this has been accomplished using AI tools. According to Sumner, it is "essentially the same codebase ported to Rust." Asked whether the Rust version would be maintained mainly by Anthropic’s Claude Code, Sumner said "this is already the status quo; we haven’t been typing code ourselves for many months now. Even pre-acquisition [by Anthropic] this was pretty much accurate." Sumner was formerly a strong Zig advocate, but Zig’s no-AI policy is at odds with the Bun team’s way of working, and recent versions of Bun use a Zig fork with contributions that cannot be merged upstream, and which Zig’s maintainers said would not be welcome regardless of the AI aspect. Version 1.3.14, the last one still to use Zig, adds a built-in image processing API for decoding, transforming and encoding images. It is designed as a drop-in replacement for the Sharp image processing library for Node.js. The new release also adds experimental support for the HTTP/3 (QUIC) protocol in Bun’s integrated server. The full release notes describe these and other new features. Is it possible to move this fast and not break things? Bun's migration from Zig to Rust will be watched with interest by AI advocates and sceptics alike. ®
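The bug classes Sumner lists map directly onto Rust's ownership model. As a minimal illustrative sketch (this is not Bun's actual code), a value has exactly one owner, so a stale use after a move is rejected at compile time and cleanup happens automatically when the owner goes out of scope:

```rust
// Sketch: how ownership turns memory bugs into compile errors or
// automatic cleanup. `byte_count` takes ownership of its argument.
fn byte_count(data: Vec<u8>) -> usize {
    // `data` is freed here when the function returns; there is no
    // explicit free call to forget on any path, including early returns.
    data.len()
}

fn main() {
    let buf = vec![1u8, 2, 3];
    let n = byte_count(buf); // ownership of `buf` moves into the call
    // println!("{:?}", buf); // use-after-move: the compiler rejects this
    //                        // line, which is how use-after-free becomes
    //                        // a build failure rather than a runtime bug.
    assert_eq!(n, 3);
    // Double-free is likewise impossible: only one owner ever existed,
    // so the destructor runs exactly once.
}
```

As Sumner notes, this does nothing for leaks caused by holding references too long, which is why those remain "on us" even after the rewrite.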

Americans would rather have a nuclear plant in their backyard than a datacenter

Thu, 2026-05-14 12:30
The majority of Americans are now opposed to datacenters being built in their area, many of them strongly opposed, pointing to tough times ahead for site developers. A Gallup survey found that more than 70 percent of respondents would be against the construction of an AI datacenter in their neighborhood, with almost half (48 percent) strongly opposed. Only 27 percent were in favor. The polling shows how quickly AI server farms have become politically toxic in the US, not helped by stories about their effects on energy bills, their slurping up of water supplies, and the air and noise pollution they create in their vicinity. To highlight this, Gallup found that more US residents are opposed to massive data halls than to having a nuclear power plant in their backyard: 53 percent of Americans oppose building a nuclear energy site nearby, compared with the 71 percent against datacenter construction. When it comes to the reasons for opposing AI campuses, half of all respondents cite the effect on resources, with excess water usage and potential power grid constraints topping the list. Concern about loss of farmland and nature was surprisingly low, with just 7 percent mentioning it, though it is possible the scores are higher in rural areas. Quality-of-life concerns such as increased traffic were put forward by nearly a quarter, while a fifth mentioned higher utility bills. Many were worried about AI specifically: that it will replace human workers, that they don't trust it, that it is moving too fast, and that the industry needs regulating. Perhaps the latter sentiment is why President Trump appears to have shifted his own position on the need for AI regulations. Conversely, those in favor of datacenters cite economic benefits, with 55 percent mentioning increased job opportunities, and 13 percent citing increased tax revenues. 
However, these people are perhaps laboring under some delusions, as datacenters generally deliver few long-term local jobs once they are operational, and far from increasing tax revenue, many benefit from generous tax subsidy schemes that are costing some individual US states upward of $1 billion in lost income each year. This being America in 2026, Gallup looked at how attitudes stack up depending on political affiliation. It found that Democrats, at 56 percent, are much more likely than Republicans to be strongly opposed to a server farm in their vicinity. But 39 percent of Republicans are also strongly opposed, while another 24 percent are somewhat averse to it, and only about a third are in favor. Gallup points out the contradiction: for AI usage to expand in the US, facilities that can handle the necessary computing power will have to be built. But most Americans appear to take a "not in my backyard" attitude to new bit barns, and that attitude has grown in strength. The Register noted this last year, when Emma Fryer, public policy director for datacenter operator CyrusOne, said: "People don't make a connection between the digital services they depend on every minute of every day of their lives and the fact that providing them every minute of every day of their lives requires industrial-scale infrastructure." She was speaking during a discussion of the industry's image problem at the Datacloud Global Congress event in Cannes, France. Garry Connolly, founder of Digital Infrastructure Ireland, told the same audience: "Most people are fucking scared of AI, like we're feeding a monster." Telling the public that all those massive datacenters are needed for AI is therefore not a winning argument. ®

ZTE and Telkom Indonesia sign strategic MoU to accelerate digital solutions and infrastructure development

Thu, 2026-05-14 12:11
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, has officially signed a Memorandum of Understanding (MoU) with PT Telkom Indonesia (Persero) Tbk to strengthen strategic cooperation in the development of digital solutions and infrastructure. The MoU marks a significant milestone in the long-standing partnership between ZTE and Telkom, reinforcing both parties' commitment to accelerating Indonesia's digital transformation through the deployment of advanced technologies, including cloud computing, artificial intelligence (AI), and next-generation connectivity. Through this collaboration, ZTE will leverage its global capabilities in digital infrastructure, AI-driven solutions, and integrated platforms to support Telkom in enhancing its digital ecosystem. The partnership is expected to accelerate innovation, strengthen service capabilities, and enable more scalable and secure digital solutions for enterprise and government sectors. Zhu Yang, Sales Director of ZTE Indonesia, stated, "We are honoured to strengthen our collaboration with Telkom Indonesia, a key digital ecosystem enabler in Southeast Asia. This partnership reflects our shared vision to build intelligent, efficient, and sustainable digital infrastructure. By combining ZTE's technological expertise with Telkom's strong market presence, we aim to unlock new value and support Indonesia's digital economy growth." From Telkom's perspective, this collaboration aligns with the company's broader transformation strategy to evolve beyond a traditional telecommunications operator into a digital infrastructure and platform-driven enterprise. Seno Soemadji, Director of Strategic Business Development & Portfolio PT Telkom Indonesia (Persero) Tbk, emphasized that strategic partnerships play a critical role in accelerating the company's long-term growth agenda. 
"This collaboration reflects our continued focus on strengthening digital infrastructure as a foundation for future growth. Moving forward, Telkom is committed to scaling its capabilities across data center, connectivity, and cloud-based platforms, while embedding AI as a core enabler to deliver more integrated and high-value solutions for our customers. Through partnerships like this, we aim to build a more resilient, secure, and competitive digital ecosystem in Indonesia and the region," he said. The cooperation also supports Telkom's ongoing efforts to sharpen its portfolio focus and enhance execution discipline, ensuring that each initiative contributes to sustainable value creation and long-term competitiveness. Looking ahead, ZTE and Telkom will explore various collaboration areas, including digital infrastructure development, enterprise solutions, AI-enabled services, and capability building, to support the evolving needs of Indonesia's digital economy. Contributed by ZTE.

NASA fleshes out Artemis III, the Moon mission that won't go to the Moon

Thu, 2026-05-14 11:59
Artemis III is currently targeted for late 2027, and NASA has shared some of its plans for the mission, though exactly how SpaceX and Blue Origin will participate remains unclear. The mission to low Earth orbit will be launched with a "spacer" rather than the Interim Cryogenic Propulsion Stage (ICPS) that would otherwise be used on lunar voyages to send the Orion capsule to the Moon. According to NASA, the crew will spend more time in the Orion capsule than the Artemis II astronauts to further test the spacecraft's life support system. NASA will also demonstrate the docking system alongside an upgraded heat shield. As for the lunar lander, NASA has remained tight-lipped, only saying that operations would be "informed by Blue Origin and SpaceX capabilities." However, the agency stated that astronauts could potentially enter "at least one lander test article." There might also be an opportunity to evaluate the interfaces of Axiom's AxEMU spacesuit. There could, in theory, be three launches during the Artemis III mission: one for Orion, atop the SLS (the core stage of which is in NASA's Vehicle Assembly Building), with separate launches for SpaceX's Starship human landing system pathfinder and Blue Origin's Blue Moon Mark 2 landing system pathfinder. Without an ICPS, the European-built Orion service module will provide propulsion to circularize the spacecraft's orbit. Artemis III was supposed to mark a crewed return to the lunar surface, but was changed earlier this year to be a test of commercial lunar lander technologies in low Earth orbit. Jeremy Parsons of NASA's Exploration Systems Development Mission Directorate called the development a "stepping stone" to a lunar landing, saying: "For the first time, NASA will coordinate a launch campaign involving multiple spacecraft integrating new capabilities into Artemis operations." Kind of. In 1965, NASA launched the first crewed flight of the Gemini program. 
Several missions in the program involved launching another spacecraft – the Agena target vehicle – followed by a crewed Gemini launch to demonstrate rendezvous and docking techniques. The final crewed flight, Gemini 12, was launched less than two hours after the Agena [PDF]. While NASA is unlikely to manage that sort of quick-fire launch cadence, the agency will certainly hope to avoid a repeat of the infamous Gemini 8 incident, in which a stuck thruster almost resulted in the loss of astronauts David Scott and Neil Armstrong. ®

Cops arrest man suspected of being Dream Market kingpin

Thu, 2026-05-14 11:26
A man police suspected of being the administrator of the former leading online drug bazaar Dream Market is facing charges in both his native Germany and the US following his arrest earlier this month. Prosecutors claim Owe Martin Andresen, 49, is the individual known by the “Speedstepper” alias, one of the few Dream Market admins who evaded identification by law enforcement during the 2019 efforts to shutter the platform. While other crime leaders on the platform have been convicted, it took the authorities years to identify their latest suspect, whom they believe was the site's main admin. Authorities said they tracked him down by monitoring crypto wallets and tracking purchases of gold bars that the indictment claims were delivered to his home address. Other lower-level admins were convicted long ago, including French national Gal Vallerius, who was sentenced to 20 years in prison a year after being arrested at Atlanta airport in 2017 on his way to attend the World Beard and Mustache Championships (yes, really). Andresen was arrested by German police on May 7 after the US indicted him in January, charging him with several counts of money laundering offenses. He faces similar charges in Germany. Authorities spent years gathering small pieces of evidence that eventually tied Andresen to Dream Market’s helm. After the platform shut down in 2019 amid mounting pressure from law enforcement, none of the suspected admins touched Dream’s infrastructure, including the operation’s known cryptocurrency wallets, which contained millions of dollars’ worth of tokens. Three years later, between November and December 2022, Andresen allegedly accessed these numerous wallets and transferred their contents into a single, consolidated one - a step only someone with access to Dream’s private key could carry out. Police believe this was Speedstepper. 
The next breadcrumb came almost a year later: in August 2023, Andresen allegedly used an Atlanta-based cryptocurrency service provider to purchase gold bars from various international companies using funds from the consolidated wallet. The indictment claims he had those gold bars shipped directly to his house in Germany, rather than to a more neutral, less compromising location. German police believe that between then and April 2025, Andresen executed several other money laundering schemes, washing more than $2 million in the process. Upon his arrest on May 7, police searched Andresen’s residence “and two other locations,” where officers found gold bars worth approximately $1.7 million, more than $23,000 in cash, and several bank accounts and crypto wallets containing roughly $1.2 million combined. All of these proceeds are thought to stem from the funds generated by Dream Market and the various fees it charged for transactions and for sellers to list their illicit wares. Dream Market operated between 2013 and 2019 and benefited greatly from the AlphaBay and Hansa seizures, scooping up their users after playing second fiddle to both platforms for much of their respective reigns. According to US Attorney Theodore Hertzberg, at its peak Dream had around 100,000 concurrent listings, most of which were for drugs. The US said the market was responsible for the trafficking of huge quantities of illegal narcotics, including more than 90kg of heroin, 450kg of cocaine, 25kg of crack cocaine, 45kg of methamphetamine, 13kg of oxycodone, and 36kg of fentanyl. “Andresen allegedly channeled commissions earned from selling illegal drugs, stolen personally identifiable information, counterfeit identification documents, and other items through cryptocurrency wallets and even converted his ill-gotten gains into gold bars,” said US Attorney Hertzberg. 
“Thanks to the close coordination between federal and German law enforcement, Andresen and his co-conspirators will no longer profit from the online sales of narcotics and fraud services, and Andresen will be prosecuted in both Germany and the United States as a result of his actions.” Andresen faces 12 federal charges - six counts each of international and domestic concealment money laundering - with each count carrying a maximum 20-year sentence. German authorities also charged Andresen with “several” counts of domestic money laundering, each carrying a maximum five-year prison stint. ®
Categories: Linux fréttir

UK government prescribes Single Patient Record for NHS data chaos

Thu, 2026-05-14 11:04
The UK government has confirmed plans for a Single Patient Record (SPR), a major overhaul of NHS health data management that could involve the service's controversial Palantir-run Federated Data Platform (FDP). In the King's Speech yesterday, the Labour government said it would push ahead with plans to introduce the NHS Modernisation Bill, which is set to include legislation for the introduction of the SPR, in the new Parliamentary year. Previous governments have found their efforts to bring together electronic patient records held by family doctors, hospitals, and other specialist services beset by technical complexity, a mind-bending web of rules and roles, and some cultural intransigence. Nonetheless, the government said its plan for the SPR would allow the NHS to "bring together patients' health and social care records into one place to improve patient safety and experience." It said patients would be able to see their own health records securely on the NHS App. The plan is to roll out the service to those receiving maternity and frailty care by 2028, with wider implementation to follow. An impact statement for the policy, published in January, said costs would encompass product development; tech and data integration, including alignment with external vendors; delivery and administration, such as business case development, engagement, and clinical and system input; as well as commercial costs. "The broad scope of the SPR means it will require investment to ensure that staff such as paramedics and community pharmacists have the same access to their patients' data as those working in GP surgeries and hospitals," it said. "Depending on the approach to the SPR, in order to maximize its value, activities may need to include translating the medical terminology in care records into plain English so that they can be readily understood and used by the patient, and to digitize historic patient information." 
While the document says the SPR could support automated triage of patients, potentially reducing variation in the service, "there are risks to delivering the Single Patient Record due to the magnitude and complexity of the program and integration with legacy systems." The impact assessment said there was a risk of reliance on a single provider and "de-facto vendor-lock." "While many clinicians would support data sharing for the purposes of improving care, there may be a risk of clinical resistance to changes to data sharing if safeguards are perceived to be insufficient," the document said. Dr Emma Runswick, council deputy chair of doctors' union the BMA, said: "The NHS Modernisation Bill is a huge undertaking and doctors' and patients' past experience with large top-down reorganisations of the NHS have not always been a happy one. The announcement of a SPR is welcome, however it is crucial that GPs' voices are listened to in its implementation to ensure patient data remains safe and patient confidence is protected." Currently, GPs are official "controllers" of patient data under UK data protection law, although that may change with the introduction of the new SPR. NHS England is currently planning the SPR rollout. A meeting held by the soon-to-be-defunct quango last year "accepted that an appropriate data controller for SPR is necessary" and that change would require a review of the legislation. The minutes, obtained by campaign group medConfidential under the Freedom of Information Act, said: "Given SPR will be a multi-service record it would not be appropriate for GPs to act as the data controller. It was agreed that while the NHS will be the data controller/custodian, patients would expect to own their records: how this can be achieved requires further thought." 
In an official statement, BMA GP Committee England chair Dr Katie Bramall said: "GPC England has not been part of the discussions on what form the Single Patient Record will take, who will be granted access, the purposes for which it will be used, or which company will be contracted to operate it. "There are already existing mechanisms that allow those in secondary care to view the live GP record, and therefore, the Government needs to explain why an additional system is needed. Until the security of any data flows can be guaranteed, and full patient-facing audit trails are made available via the NHS App showing who has accessed confidential medical data and why, we remain concerned. "We also remind patients that they can exercise their right to opt out of secondary uses of their confidential medical data by visiting the NHS website." The NHS England Data and Digital Technology Committee also heard that the NHS was considering using existing electronic patient record (EPR) systems and/or a role for the controversial Federated Data Platform, run by US spy-tech firm Palantir, in building the SPR solution. Sam Smith, medConfidential coordinator, told The Register that the FDP/Palantir arrangement – which has been the focus of fierce criticism in Parliament recently – is likely to have a role either way. "Either there's going to be a new data store – which will be in Palantir – or there'll be infrastructure for bringing various APIs together, where you make a single call and you get back a summary of the patient's record. The system doing that will be the FDP. [NHS England] has not publicly decided what they're going to do, in practice. They'll probably do the API thing first, and if they don't get everything they wanted, they will eventually take a copy of the data." The government has backed its ambitions for NHS technology with a promised £10 billion in investment. But nationally led digital transformation in the NHS has failed in the past. 
The ambitious National Programme for IT (NPfIT), launched by the Blair Labour government in 2003, had a budget estimated at £12.7 billion ($17.2 billion). Although NPfIT introduced a number of new technologies, it fell short of introducing electronic health records throughout the NHS. The National Audit Office said it did not represent value for money, and in 2020 it warned there was a lack of systematic learning from past failures in NHS digital transformation. ®

Dirty Frag gets a sequel as Fragnesia hands Linux attackers root-level access

Thu, 2026-05-14 10:01
Linux admins hoping Dirty Frag was a one-off horror from the kernel networking stack are about to have a considerably worse week. Researchers at Wiz have published an analysis of "Fragnesia," a Linux kernel local privilege escalation flaw discovered by William Bowling of the V12 security team that allows unprivileged users to gain root by corrupting page cache memory. The bug, tracked as CVE-2026-46300, has public proof-of-concept exploit code documented by V12 on GitHub that demonstrates the vulnerability being used against /usr/bin/su to spawn a root shell. According to Google-owned Wiz, the flaw sits in the Linux kernel's XFRM subsystem, specifically ESP-in-TCP processing tied to IPsec support. By carefully triggering the bug, attackers can modify protected file data in memory without changing the original files stored on disk. Wiz describes Fragnesia as part of the broader "Dirty Frag" bug family rather than a completely separate class of issue. Dirty Frag itself only surfaced days ago and had already attracted attention thanks to public exploit code, incomplete patch coverage, and unusually reliable privilege escalation. According to researcher Hyunwoo Kim, who uncovered Dirty Frag, "Fragnesia" emerged as an unintended side effect of patches shipped to fix the original Dirty Frag vulnerabilities, adding yet another entry to the long tradition of security fixes accidentally creating new security problems. As The Register previously reported, Dirty Frag followed hot on the heels of Copy Fail, another Linux kernel privilege escalation flaw that abused page cache handling to overwrite supposedly read-only files. Historically, local Linux privilege escalation bugs had a reputation for being unreliable, crash-prone, or fiddly enough that attackers needed good timing and a fair bit of luck to pull them off cleanly. 
Fragnesia looks different, as Wiz and V12 both say the exploit avoids race conditions entirely, making it far more predictable than older Linux root exploits like Dirty COW. That makes the bug much more useful after an initial compromise. An attacker who gains access to a system through phishing, stolen credentials, or a vulnerable cloud workload suddenly has a cleaner path to full root access. The V12 proof-of-concept repository is already public, while Linux vendors have started pushing out advisories and mitigation guidance. AlmaLinux warned that all supported releases are affected and urged administrators to patch quickly or disable unused ESP-related functionality where possible. Similar advisories have also been issued by Amazon Linux, CloudLinux, Debian, Gentoo, Red Hat Enterprise Linux, SUSE, and Ubuntu as distributors scramble to assess exposure across supported kernel versions. Microsoft also urged organizations to patch quickly, noting that though it had not observed in-the-wild exploitation so far, Fragnesia "can modify any file readable by the user, including [/]etc[/]passwd." The Linux networking stack is starting to look less like infrastructure and more like a root exploit vending machine. ®
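The "disable unused ESP-related functionality" advice from AlmaLinux can be approximated on hosts that do not use IPsec by preventing the ESP kernel modules from loading. Below is a hedged sketch, not an official mitigation: the esp4/esp6 module names cover IPv4/IPv6 ESP on mainline kernels, but module names and whether ESP is built in vary by distro, so check your vendor's advisory before relying on it.

```python
# Sketch: write a modprobe.d fragment that stops the ESP (IPsec)
# kernel modules from loading on hosts that do not use IPsec.
# Target directory is a parameter so the script can be dry-run
# without root before pointing it at /etc/modprobe.d.
from pathlib import Path
import sys

def write_esp_block(conf_dir: str = ".") -> Path:
    """Write disable-esp.conf into conf_dir and return its path.

    'install <mod> /bin/false' makes even an explicit modprobe fail,
    which a plain 'blacklist' line would not.
    """
    conf = Path(conf_dir) / "disable-esp.conf"
    conf.write_text(
        "install esp4 /bin/false\n"
        "install esp6 /bin/false\n"
    )
    return conf

if __name__ == "__main__":
    # Real use (assumed invocation): sudo python3 disable_esp.py /etc/modprobe.d
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    print(f"wrote {write_esp_block(target)}")
```

This only blocks future module loads; it does nothing on a kernel with ESP compiled in, or if the modules are already loaded, so patching remains the actual fix.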

Calling the cops just got extra AI as police seek to add tech to contact systems

Thu, 2026-05-14 09:15
Police forces across England, Wales and Northern Ireland will add personalization and artificial intelligence (AI) to their jointly run digital contact systems through a £72 million contract to manage and develop them. Almost all police forces in the three nations use the Digital Public Contact’s Single Online Home web platform for their own websites, with the platform also running Police.uk, a national information site, and Data.police.uk, which provides information on police-recorded crime. The Metropolitan Police Service (MPS), which hosts Digital Public Contact services on behalf of the National Police Chiefs Council, hopes to find a single supplier for these services under a new contract running from July 2027 to December 2029, with a possible three-year extension, according to a market engagement procurement notice published on 12 May. Existing Digital Public Contact services include the Single Online Home websites; linked services that pass information on crimes and incidents from the public to relevant officers; and the National My Police Portal, a new service using GOV.UK’s One Login to link victims with officers in charge of cases, which South Yorkshire Police started using in January. The new contract will also cover use of AI. In March, West Yorkshire Police and Digital Public Contact started using AI to extract material from old control room calls, which at present are normally recorded but not transcribed. In the procurement notice, the MPS said that AI could also be used in reporting, analysis, conversational interactions and staff assistance. In a speech on the development of Digital Public Contact last October, Cambridgeshire’s chief constable Simon Megicks said that the work also includes developing a natural language switchboard that can help direct incoming calls, and live services to assist operators, which is being piloted by Humberside Police. 
“It supports call handlers in real time, and as they converse, the AI listens in and conducts live database searches, surfacing relevant information instantly,” he said of the assistance service at a National Police Chiefs Council innovation event. “Operators are empowered to make better decisions, quicker: reducing risk and improving outcomes for the public.” In the King’s Speech on 13 May the government confirmed plans to merge forces in England and Wales and establish a National Police Service. The procurement notice says that the new contract will provide “a robust foundation” supporting these structural changes, although they are likely to take place beyond the end of the contract. Following a market engagement event on 9 June, the MPS plans to publish a tender notice for the work around the end of July. ®

Bedrock and a hard place: Claude adventure leaves AWS user staring down $30K invoice

Thu, 2026-05-14 08:30
The world of AI is exciting, but there are plenty of expensive pitfalls ready to catch out the unwary, as one Register reader found when taking Anthropic's Claude Opus for a spin courtesy of Amazon Bedrock. Our reader managed to run up Bedrock charges totaling $30,141.33 in April 2026, despite using AWS Cost Anomaly Detection (CAD) to avoid any nasty surprises. Thirty-three days before our reader's first use of Bedrock, the threshold in CAD was set to "Absolute ≥ $100 AND Relative ≥ 40%" so alerts should have fired if things got too spendy. As for which services to monitor, our reader chose "AWS Services," which Amazon says "tracks all AWS services automatically." Except it apparently doesn't, at least not in the way our reader expected. The problem is that AWS Marketplace isn't supported by CAD, so costs incurred wouldn't trigger an alert. And how are Anthropic Claude models billed? Through the AWS Marketplace. After burning through our reader's AWS Activate credits (totaling $8,026.54 in this case), Amazon started charging for model inference on the Bedrock Marketplace, racking up $30,141.33, plus another $675.07 in AWS infrastructure charges, without a peep from the CAD service. "The credits masking made it worse," our reader told us. "AWS Activate credits did cover the first ~$8k of charges, which meant the Marketplace billing was silently working for weeks before the credits ran out. There was no notification when credits were exhausted – the charges simply started accumulating as invoiced amounts." The first warning that things were mounting up came in the form of a surprisingly large invoice. Corey Quinn, a cloud economist at the Duckbill Group and occasional contributor to this publication, told The Register: "It's unintuitive that Bedrock model spend is Marketplace unless you're entirely too familiar with AWS." 
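For readers wanting to replicate the alert configuration described above, the threshold "Absolute ≥ $100 AND Relative ≥ 40%" maps onto the Cost Explorer API's `ThresholdExpression`. A minimal Python sketch follows, assuming boto3 and treating the subscription name and email address as hypothetical; note that, per AWS's own statement, even a correctly configured subscription would not have caught these charges, because CAD skips AWS Marketplace:

```python
# Sketch: build the compound CAD alert threshold our reader used
# ("Absolute >= $100 AND Relative >= 40%") as a Cost Explorer
# ThresholdExpression, then attach it to an anomaly subscription.

def build_threshold(absolute_usd: int, percent: int) -> dict:
    """Fire only when an anomaly's total impact exceeds BOTH an
    absolute dollar amount and a percentage of expected spend."""
    return {
        "And": [
            {"Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": [str(absolute_usd)],
            }},
            {"Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_PERCENTAGE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": [str(percent)],
            }},
        ]
    }

# Attaching it would look roughly like this (needs boto3, AWS
# credentials, and an existing monitor ARN, so it is commented out):
#
# import boto3
# ce = boto3.client("ce")
# ce.create_anomaly_subscription(AnomalySubscription={
#     "SubscriptionName": "spend-alerts",        # hypothetical name
#     "MonitorArnList": [monitor_arn],
#     "Subscribers": [{"Type": "EMAIL", "Address": "ops@example.com"}],
#     "Frequency": "DAILY",
#     "ThresholdExpression": build_threshold(100, 40),
# })
```

The "And" form is what makes the alert compound: either dimension alone would fire on small-but-spiky or large-but-gradual anomalies respectively, which is presumably why our reader combined them.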
Quinn told us he does most of his Claude inference directly with Anthropic to take advantage of the company's real-time billing, alerts, cutoffs, per-key limits, and so on. The approach has avoided some potentially expensive mistakes. On AWS, the lack of CAD support for Marketplace charges makes it all too easy to run up a big bill without realizing it, particularly when it comes to AI usage. This could be regarded as a cautionary tale. If one digs deeply enough into the AWS documentation on CAD, there is a line that warns that AWS Marketplace is an unsupported service. However, it is far from obvious to customers that Claude on Bedrock is billed through the AWS Marketplace. The fact that Marketplace billing bypasses the monitoring tools compounds the issue, and could easily leave a customer with an unpleasant surprise at invoice time. An AWS spokesperson told The Register: "AWS offers multiple tools to help customers manage spend, including AWS Budgets, which covers Amazon Bedrock spend on AWS Marketplace and other services. As noted in our documentation, AWS Marketplace charges are not currently supported by Cost Anomaly Detection. Customers with questions should reach out to AWS Support." ®

To gain root access at this company, all an intruder had to do was ask nicely

Thu, 2026-05-14 07:00
PWNED Welcome once again to PWNED, the column where we help you prepare for security success by studying others’ embarrassing failures. Today’s terrible tale involves individuals trying to do right by a company executive by letting their guard down, never a smart move. Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity is available upon request. Our sad story comes from Brandon Dixon, who currently serves as CTO and co-founder of AI security firm Ent. In a prior life, however, Dixon was a penetration tester for hire and he saw some things that made all my remaining hairs stand on end just hearing about them. During one pentesting assignment, Dixon tried to find out how easy it would be to steal someone’s account using social engineering. The answer: barely an inconvenience. Dixon telephoned IT security and pretended that he was the head of security who had lost his password. When they asked him challenge questions, he said he had forgotten the answers to those also. Then he gave them the password he wanted to use over the phone and they did a reset for him. After that, he was able to get into the network and do whatever he wanted there. There’s so much that’s obviously wrong here that it’s hard to know where to begin with our lesson-taking. The IT support agents should not have taken Dixon’s word that he was the security manager, especially after he failed challenge questions, and should have denied his request to reset the password. They were probably thinking “this guy is an executive and we don’t want to piss him off” rather than “we have procedures that everyone must follow.” The other problem here is that the IT department entered Dixon’s suggested password for him over the phone. First of all, the IT department should have sent a password reset to the real employee’s email or phone number. Second of all, it’s piss-poor security for anyone to know a user’s password other than the user themselves. 
And I say this as someone who used to work for a company where, if you had a problem, the IT support people would ask for your password via chat. Dixon also shared another story about social engineering from a time when he consulted for a pharmaceutical company. Staff at rival firms would call sales and marketing reps, pretend to be coworkers, and extract information about upcoming drugs. This would allow competitors to know what was coming and how to respond to it. To help solve the problem, Dixon instituted a system where real employees had to give a secret password at the beginning of a conversation. “I built a system called 'Chal-Resp,' short for 'challenge-response,' that generated word pairings so a user could validate they were speaking with an actual employee,” he told The Register. “The caller would need to say the word and the end-user would need to respond with the proper challenge; only employees had access.” What both of Dixon’s stories have in common is proof that humans are eager to please and be helpful. But healthy suspicion is at the root of infosec, so it behooves us all to be a little less helpful to strangers in the workplace. ®
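The challenge-response word pairing Dixon describes fits in a few lines of code. This is a sketch of the general idea only, not his actual system: the word list, helper names, and random pairing policy are all invented for illustration.

```python
# Sketch of a "Chal-Resp"-style scheme: generate a secret
# challenge/response word pairing that only real employees are
# given, then check what a caller says against it.
import hmac
import secrets

# Hypothetical shared word list distributed to employees.
WORDS = ["harbor", "copper", "lantern", "meadow", "quartz", "violet"]

def new_pairing() -> tuple[str, str]:
    """Pick a random challenge word and a distinct response word."""
    challenge = secrets.choice(WORDS)
    response = secrets.choice([w for w in WORDS if w != challenge])
    return challenge, response

def verify(pairing: tuple[str, str], challenge: str, response: str) -> bool:
    """Check both words; compare_digest avoids timing leaks on the
    comparison itself (inputs must be ASCII strings)."""
    want_challenge, want_response = pairing
    ok_c = hmac.compare_digest(want_challenge, challenge)
    ok_r = hmac.compare_digest(want_response, response)
    return ok_c and ok_r
```

In practice the pairing would be rotated on a schedule and distributed out of band; a static password shared over the same phone channel it protects would defeat the point.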
