TheRegister
Vietnam to develop domestic cloud so it can ditch risky overseas operators for government workloads
Vietnam has decided to develop its own cloud platform, so its government agencies can stop using foreign-owned services.

Prime Minister Le Minh Hung last week announced the plan in Decision 808/QD-TTg, which lists 20 strategic technologies Vietnam wants to develop to improve its technological self-reliance and give its government the tools to tackle national challenges. Developing a national cloud computing platform is number 13 on the list.

Machine translation of Decision 808 yields the following goals for the project: “Ensuring national data sovereignty and cybersecurity for the digital government and key digital economic infrastructures; forming a centralized, secure, and reliable digital and data infrastructure to serve national digital transformation; gradually replacing foreign cloud services in state agencies, reducing the risk of data leaks and breaches of state secrets.”

The move is a sign that Vietnam’s government, like many others, fears entanglements with cloud providers that may struggle to escape edicts from their home jurisdictions. Major hyperscalers Microsoft, Google, and Tencent Cloud have yet to build facilities in Vietnam. AWS will bring one of its lightweight Local Zones to Hanoi, Alibaba Cloud intends to build a datacenter, and Huawei Cloud has expressed interest in doing likewise.

Vietnam’s government wants more love from hyperscalers – the nation’s Deputy PM recently met with AWS officials and called for greater co-operation. Yet any Vietnamese government workloads currently operating in a major hyperscaler violate the nation’s own laws that require local storage of personal information!
Other technologies Vietnam wants to develop include a large-scale Vietnamese language model, virtual assistants, and AI to power applications including cameras, credit risk management, and something that translates as “a national smart education platform applying controlled AI.” The nation also wants its own next-generation firewall, anti-malware software, a next-generation SIEM system, and an “AI-integrated security operations center platform.” Quantum-resistant encryption also makes the list, as does a “user and entity behavior analysis system.”

Rare earth processing is another capability Vietnam desires, as are 5G expertise, the ability to build and operate autonomous and industrial robots, and improved semiconductor design skills.

Vietnam is in a hurry: Decision 808 set a 2030 deadline to get this all done. According to a Tuesday post to a government news platform, 2030 is also the year in which Hanoi expects all core government services will be online, and digital infrastructure will enable outcomes such as “Ensuring social welfare and supporting crime prevention and control, national security, and social order and safety” plus “Supporting scientific research and innovation.”

And in 2035, Vietnam “will become a developed digital nation” in which “National databases, with population data serving as the core, will be interconnected, shared, and effectively utilized to support the development of a smart government, enabling data-driven decision-making based on real-time information.” Smart government will mean “Citizens will benefit from personalized, automated, and convenient digital services tailored to different life events.”

What a time to be alive. ®
Categories: Linux fréttir
Execs admit AI makes them value human workers less
Executives have leaned in to AI, only to stumble before reaching any return on their investment. "Most AI spending has under-delivered, leaving execs feeling like they’re burning cash," says employment biz G-P (Globalization Partners) in its third annual AI at Work Report. The report finds corporate leaders' enthusiasm for AI waning as ROI proves elusive.

Sixteen percent of companies saw a negative ROI from AI investments last year, and 73 percent of executives whose AI efforts did pay off said ROI fell short of expectations, according to the report. These findings are based on a survey of 2,850 executives (VP level and up) in the US, Germany, Singapore, Australia, and France, plus a separate set of 500 US HR professionals.

The AI at Work Report is a little cheerier than last year's findings from MIT NANDA researchers, who discovered only five percent of organizations have managed to successfully put AI projects into production. Regardless, execs anticipate scaling back their AI budgets if organizational goals aren't met this year.

Beyond their worries about financial benefits, corporate execs in the G-P survey have doubts about the reliability of AI, a concern borne out by recent Microsoft research. Only 23 percent of the G-P respondents said they have total confidence in AI accuracy. Those concerns mean 69 percent said they spend more time monitoring and reviewing AI, while 61 percent expressed concerns about using AI to craft sensitive documents because they doubt the output is legally accurate.

Such unease doesn't appear to be doing much to help corporate leaders empathize with workers, however. The survey found that "82 percent of executives admit AI has lowered the value they place on human employees." In fact, these leaders appear to have become somewhat suspicious of their people – about 88 percent expressed concern that employees are using AI performatively rather than adding business value.
But among such misanthropic, skeptical managers, there's enough lingering humanity to ensure that only 12 percent strongly agree "that sacrificing employee privacy for AI monitoring is worth it to reach business goals." Despite the sense that AI has reduced how human workers are valued, about half of execs still cite the scarcity of employees with AI skills and the lack of data literacy as barriers to their AI goals. You still need human talent to stand up money-losing AI projects. ®
Doozy of a Patch Tuesday includes 30 critical Microsoft CVEs
Microsoft released fixes for 137 CVEs on Tuesday, none of which are known to have been targeted by attackers. But the news is not all good: Redmond rated a whopping 30 flaws as critical, with 14 earning a 9.0 or higher CVSS severity rating, including one perfect 10.

Plus, everyone who celebrates the monthly patchapalooza event received validation for what we all widely suspected last month: Yes, Redmond (and everyone else, for that matter) is using AI to find a ton more bugs than ever before. And that means a lot more work for all the folks applying and testing the patches.

“This month's release sits on the larger side of a hotpatch month, and we expect releases to continue trending larger for some time,” Tom Gallagher, VP of engineering at Microsoft Security Response Center, said in a note on this month's Patch Tuesday.

Microsoft also said its secret-until-now AI bug hunting system, codenamed MDASH, found 16 of the vulnerabilities addressed in this month’s release. Redmond additionally announced it is making the tool available to a limited number of customers in private preview, along the lines of Anthropic’s Mythos and Project Glasswing. In other words: no break for Microsoft admins this May Patch Tuesday.

Let’s take a look at some of the nastiest and most interesting bugs, which also received some of the highest CVSS ratings this month, coming in hot at 9.8 and 9.9.

First up: CVE-2026-41096. This one is a critical, 9.8-rated Windows DNS Client remote code execution (RCE) flaw, and while Redmond says exploitation is “unlikely,” we’d suggest patching it ASAP. It’s due to a heap-based buffer overflow, and no authentication or user interaction is needed to exploit it (it's done by sending a specially crafted DNS response to a vulnerable system), potentially leading to memory corruption and RCE.

“Since the DNS Client runs on virtually every Windows machine, the attack surface is enormous,” Zero Day Initiative bug hunting boss Dustin Childs warned.
“An attacker with a position to influence DNS responses (MitM, rogue server) could achieve unauthenticated RCE across your enterprise.”

Plus, it could happen across a ton of enterprise systems very rapidly, Jack Bicer, Action1 vulnerability research director, told The Register. “This CVE requires immediate attention,” he said. “Successful attacks may lead to widespread endpoint compromise, ransomware deployment, credential harvesting, and operational disruption across corporate networks.”

Another especially bad bug, CVE-2026-42898 in Microsoft Dynamics 365 on-premises systems, achieved a near-perfect 9.9 CVSS rating and also leads to RCE. Any authenticated user can trigger this vuln - it doesn’t require admin or other elevated privileges. As Redmond explains: “An attacker with the required permissions could modify the saved state of a process session in Dynamics CRM and trigger the system to process that data, which could result in the server unintentionally executing malicious code.”

Since exploitation could lead to a scope change, meaning the bug can affect systems beyond the vulnerable component, it’s a pretty serious risk to enterprises and should be prioritized. “Scope changes are pretty rare, so if you’re running Dynamics 365 On-Prem, definitely test and deploy this patch quickly,” Childs said.

The second of two 9.8-rated bugs is CVE-2026-41089. It’s a stack-based buffer overflow in Windows Netlogon that allows an unauthenticated, remote attacker to execute code on vulnerable machines by sending a specially crafted network request to a Windows server acting as a domain controller. As Childs points out, the fact that attackers can exploit this flaw without credentials or user interaction makes it wormable. “This is the highest-impact bug that requires immediate patching: a compromised domain controller is a compromised domain,” he added.
The silver lining this month for defenders is that the single CVE earning a perfect 10.0 CVSS rating is in Azure DevOps, and doesn’t require users to fix anything. CVE-2026-42826, an information disclosure vulnerability in the DevOps toolchain, “has already been fully mitigated by Microsoft,” according to Redmond. “There is no action for users of this service to take. The purpose of this CVE is to provide further transparency.” ®
Google users fight for refunds as unauthorized API usage bills soar
EXCLUSIVE Several Google Cloud customers say their API keys have been compromised and used by bad actors to run inferencing workloads on the most expensive video and picture models, leaving them with bills for tens of thousands of dollars and weeks of back-and-forth headaches with the Chocolate Factory as they tried to prove they were not responsible for the mess.

The problem is being hashed out on social media, with sites like Reddit collecting stories from Google Cloud users that seem to follow a similar pattern: After months or years paying small monthly bills to Google Cloud for access to tools like Maps, their API keys are discovered, and in minutes they are charged thousands of dollars for API calls to Nano Banana and Veo 3.

Google told The Register this is an industry-wide problem and not a security issue specific to Google. It said the vast majority of these incidents happen due to compromised user credentials, such as API keys inadvertently leaked on public code repositories like GitHub, and malicious actors who are actively scraping public repositories. Google said it encourages all customers to implement robust security practices, including enabling multi-factor authentication, routinely auditing API keys, and ensuring credentials are never committed to public repositories.

But those explanations are complicated by developers and security threat researchers who say thousands of accounts are following Google's own site configuration rules by placing their API keys in a public client. Additionally, one user told The Register they had spending caps in place that should have stopped any bill over $250. Yet according to Google, those caps can be automatically upgraded to $100,000 – without user input – if the user has spent a total of $1,000 throughout the life of the account, and the account is more than a month old.

'What the hell's going on?'
Rod Danan is CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities. He uses API calls to Google Maps as part of his platform. For years his bill never topped $50 a month, he told The Register. Then in March he got an email alert from Google saying he was being charged $3,000, and panic took hold.

“It’s just ‘Boom, we just charged you $3,000.’ I'm like, ‘What the hell's going on?’ And then you go into the application, like, ‘What is triggering this? What is the source?’ So just determining that is honestly not that simple,” he told The Register. “As I'm searching, five minutes go by and another $5,000 get charged. I’m like ‘What the hell is going on? It's just draining my money.’”

Despite the spending caps he said he had in place, by the time he shut down the API minutes later, his credit card had been charged $10,138 – almost entirely from Veo 3 video generation and Gemini image output tokens, services he has never used and that have no connection to his product. Google told him it found no evidence of fraud and has thus far refused to issue a refund.

But what makes this especially frustrating for Danan is that he said he was following Google’s advice in exposing the API key in the first place.

“You have this Google Maps key, which you know, everyone uses, and the guidance from Google is you're supposed to load it in your front end. So we did that, and all of a sudden they changed the keys so that the Google Maps key, which is exposed publicly, could be used for Gemini, and then they didn't disclose that to customers,” he said. “So then, all of a sudden, I just get multiple emails in a row. It's like $3,000, $5,000, $10,000 charged on your Google account.”

In February, security researchers at Truffle Security Co. published an article warning Google users that their Maps API keys were no longer safe to share publicly.
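Part of why exposed keys get found within minutes is that Google-issued API keys follow a well-known format: the literal prefix "AIza" followed by 35 URL-safe characters. The sketch below is purely illustrative of the kind of pattern match that secret scanners, and attackers trawling public repositories, rely on; the function name is ours, and real tooling such as Truffle Security's adds live validation and filtering on top of a match like this.

```python
import re

# Google-issued API keys share a fixed shape: the literal prefix "AIza"
# followed by 35 characters drawn from [0-9A-Za-z_-], 39 characters total.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings of `text` that match the Google API key format."""
    return GOOGLE_API_KEY_RE.findall(text)

# A fabricated, non-functional key for demonstration purposes:
sample = 'var mapsKey = "AIza' + 'x' * 35 + '";'
print(find_candidate_keys(sample))  # finds the one fabricated candidate
```

Run at web-crawl scale against public pages and repositories, a prefix match this simple is enough to explain how thousands of front-end Maps keys turn up in a single sweep.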
For years, if a coffee shop wanted to place its logo and website on Google Maps, the instructions from Google were to download the widget and upload an API key that linked their site to Google Maps, said Joe Leon, the threat researcher who wrote the warning. He told The Register that about three years ago, Google started allowing those same public API keys to also access Google Gemini models.

“You have all these people that were told, for Maps, ‘Put this key in public.’ Now maybe it's them, maybe it's someone else in their organization, but someone enabled the Gemini API in that same project,” he told The Register. “Now that same key can be used to both access Maps, and also Gemini. That’s the core of what I found.”

He said those API keys follow a particular naming convention: they begin with the characters “AIza.” A search of millions of web pages found 3,000 such Google keys that were first deployed for Maps and are now able to access Gemini, leaving those sites vulnerable to high-dollar credential attacks.

In an email to The Register, Google said it tells users not to use the same API key for multiple APIs, especially keys that could be client-facing (browser keys). It recommends always applying API restrictions – for example, restricting a key to a specific service – and applying client application restrictions such as HTTP referrer, IP address, or Android apps. Google said it now mandates that users configure API restrictions when they create API keys. Additionally, the company said, it's no longer possible to create a key that can access both Gemini and Maps.

Leon agrees that Google has taken steps to lock down access since his paper was published. “The first thing that I’ve seen is they’ve rolled out a new Gemini API key type, which is unrelated, as best I can tell, to the Google API key. So it’s prefixed with capital ‘A,’ capital ‘Q,’” he said. “Since I published that post, they’ve taken a lot of steps to try to lock this down.
The spending caps I saw, they put that in place. I didn’t know that they auto increase it. So that kind of defeats a little bit of the purpose.”

About those spending caps

Developer Isuru Fonseka, based in Sydney, Australia, has been building apps in the Google Cloud environment for 10 years. He's got a side project he has been working on for about two years, but says he's never exposed the API key that he uses to access his work inside Firebase. Additionally, he set a hard budget cap of $250.

Like Danan, he was alerted to a sudden spending spike with Google on April 29. The attack was so out of character with his purchase history that his credit card company refused the charges.

“I just woke up to a couple of emails where my credit card provider declined a number of transactions,” he said. “So then I logged into GCP to have a look. When I look into transactions, I can see that all these charges are coming through. Some are declined, but previously, there’s like, one for $500, $1,000, or $2,000. These ones went through successfully.”

He contacted Google support to flag the spending, ask what had caused it, and have it shut down, but it takes up to 36 hours for Google support technicians to be able to view a customer's usage. Google told The Register this is actually faster than industry standard, but for Fonseka, it was still infuriating.

“This was probably the most frustrating part,” he said. “There’s this weird mechanism where they can detect enough to charge your card, but not enough to show you what it is being used on … The damage ended up being in the range of like AUD$17,000 ($12,000).”

But Fonseka said even if someone were to brute-force his API key, his Google Cloud budget cap was set at Tier 1, which was locked at $250, meaning he should never have been able to spend AUD$17,000 on AI services. “But when I logged in after the attack, it was set to like Tier 2 or Tier 3, which was like $100,000. I would have never set this,” he said.
“I spoke to someone actually in Australia who was also affected by this, and he said that, based on your account standing, they automatically upgrade the tier. So if they did, that is just a terrible decision, so they must have automatically upgraded mine.”

Google told The Register it looks like Fonseka might be right. “What we believe happened in this instance you have shared is the attacker didn't change the tier; the developer’s usage (driven by the attacker) triggered Google’s automated systems to raise the ceiling, based on meeting Tier 3 qualification of Gemini API, which included at least $1,000 USD in payments to Cloud and 30 days since the first payment,” Google told The Register via email.

In a revamped policy move announced March 16, Google said it would make it easier for users to access higher dollar quotas in GCP by reducing the spending qualifications needed to reach the next tiers. Additionally, the system “automatically upgrades you to the next tier as your usage grows.” “You get access to higher rate limits and increased monthly quota as soon as the criteria is met,” Google said on its blog titled “Giving you more transparency and control over your Gemini API costs.”

Customers like Fonseka in the first tier would be automatically moved to the next tier – $2,000 – if they spend $100, and then automatically to Tier 3 if they spend $1,000 and have been a customer for 30 days. Tier 3 has a spending cap between $20,000 and $100,000.

Fonseka said he was tempted to call his credit card company and have them charge back the cost, but he fears that would likely result in the suspension of his project inside Google Cloud, which customers are relying upon. Danan told The Register that he is in the same boat.

“Even though I had spend caps on, it didn't really matter, like, all you get is alerts,” he said. “I still need Google APIs. I can't get kicked off because then my app won't work. We need the Maps API.
So there's sort of a disincentive for you to report this is fraudulent activity to your credit card company.” Both Danan and Fonseka said they are still negotiating with Google to win a refund. ®
Foxconn confirms cyberattack after ransomware crew claims it stole confidential Apple, Nvidia files
Foxconn, a critical supplier for major hardware companies like Apple and Nvidia, on Tuesday confirmed a cyberattack affecting its North American operations after the Nitrogen ransomware gang listed the electronics manufacturer on its data leak site.

“Some of Foxconn's factories in North America suffered a cyberattack,” a Foxconn spokesperson told The Register. “The cybersecurity team immediately activated the response mechanism and implemented multiple operational measures to ensure the continuity of production and delivery. The affected factories are currently resuming normal production.”

Nitrogen ransomware criminals on Monday claimed to have breached the Taiwan-based company and stolen 8 TB of data comprising more than 11 million files. The miscreants say the leaks include confidential instructions, internal project documentation, and technical drawings related to projects at Intel, Apple, Google, Dell, and Nvidia, among others. Foxconn declined to confirm that these - or any - customers’ information was hoovered up in the digital intrusion.

Nitrogen, which has been around since 2023, is believed to be one of the various ransomware offshoots that borrowed code from the leaked Conti 2 builder. And, in what may be very bad news for its latest victim, even paying the ransom demand may not guarantee recovery of encrypted files. In February, Coveware researchers warned that a programming error prevents the gang's decryptor from recovering victims' files, so paying up is futile. The finding specifically concerns the group's malware that targets VMware ESXi.

This isn’t the first time Foxconn has been targeted by ransomware gangs. In 2024, LockBit claimed to have infected Foxsemicon Integrated Technology, a semiconductor equipment manufacturer within the Foxconn Technology Group. The same criminal crew also hit a Foxconn subsidiary in Mexico in 2022. ®
Google launches line of Android laptops festooned with Gemini AI
Google is rolling out a new line of laptops based on Android instead of ChromeOS, and using the opportunity to try to move upmarket from budget-conscious Chromebooks – while also baking AI into every fissure of the system.

The new line of so-called Googlebooks seems even more obtrusive about pushing embedded AI than Windows 11 is about embedding Copilot into everything. With the OS on Googlebooks, which the company touts as the best of ChromeOS and Android, even moving the cursor over an on-screen task such as the text of an email nags you to offload work to Gemini.

Google has been publicly planning to merge Android and ChromeOS for a while, with Android boss Sameer Samat saying last year that the Android codebase would be the core of the new platform. This gives the company a chance to break into the premium laptop market, using one of its core assets, the Android ecosystem, to differentiate from the kid-friendly and budget-oriented Chromebook lineup.

While the laptops won't be coming until later this year, we can already see from the press materials and video demo that this new kind of notebook is meant to out-Copilot Microsoft. One of the main features demoed, Magic Pointer, activates when you wiggle the cursor and shows contextual suggestions based on what you hover over. For example, in the video, Alexander Kuscher, Senior Director of Laptops and Tablets at Google, showed how hovering over the date in an email brought up options to view his schedule, craft a reply saying "I'm in town on May 19," or even use Google Maps to suggest meetup spots. Having AI crammed into Windows Notepad seems quaint by comparison.

Kuscher also showed how dragging images on a Googlebook can combine them. He dragged a photo of a nursery onto an image of a swath of wallpaper and a picture of a crib, and the system generated a picture of the nursery with the crib and the wallpaper included.
The Google exec pointed out that an act like combining photos normally involves logging into a chatbot, uploading the photos, and giving it a prompt. Here it was just drag and drop. No word on whether the system can use your photos as training data.

Android apps will also work on Googlebooks, and users will be able to launch them from their phones, much like Apple's iPhone Mirroring. In the demo, Kuscher showed Duolingo running in a portrait-shaped window on the desktop operating system as if it were on his phone.

Google said that Googlebooks are being “built with premium craftsmanship and materials” by partners like Acer, ASUS, Dell, HP, and Lenovo. They also sport a Google-colored glowbar on the cover so everyone knows who owns your digital soul.

Considering the RAM shortage and the fact that IDC expects PC shipments to decline by 11.3 percent in 2026, Google has picked a challenging time to come out with a whole new category of laptop. While the company has not released pricing, we can only imagine that Googlebooks will be significantly more expensive than Chromebooks, which currently sit in the $200 to $500 range in the US. These new notebooks are likely to compete with premium consumer Windows and macOS laptops at a time when demand is declining and people are holding onto old devices longer. We see no evidence that Google is even targeting businesses, and we doubt IT departments would be interested in the features the company has focused on.

Google also announced the expansion of Gemini Intelligence onto high-end Android devices (i.e., Samsung Galaxy and Google Pixel devices) as part of Tuesday’s I/O preview, noting that it’s designed “to help your phone handle boring tasks for you.” Google provides examples like filling out online forms, summarizing websites, and even rewriting voice-to-text messages to get rid of pauses and other natural speech patterns that detract from the written word.
Speaking of Chromebooks, we asked Google what will become of its budget hardware line with the release of the Googlebook, but we didn’t hear back. We imagine they will continue to serve the educational market for some time.

Google made several other announcements during Tuesday's presentation, including a new Pause Point feature in the upcoming Android 17 that follows in Apple’s footsteps by protecting you from your own worst instincts to scroll endlessly or waste half your day playing chess on your phone. It allows you to mark certain apps as "distracting" so that when you launch them, the phone asks you to take a deep breath and reconsider your actions, which is something Apple’s mindfulness app doesn’t do.

To the dismay of everyone tired of social media reaction videos, Google is also baking the format right into Android with Screen Reactions, which will allow users to capture video of their device screen while sticking themselves in the lower corner so they can regale everyone with their opinion about whatever they’re talking over. ®
Hollywood A-listers back proposed standard that would pay them when AI uses their likeness or work
AI models can take your written work, they can take your voice, and they can even take your likeness to use for training material and for creating content that looks exactly like it came from you. Now, some actors are promoting a new licensing spec designed to protect their famous faces and yours too.

The newly formed public benefit non-profit is extending the Really Simple Licensing (RSL) spec developed by the RSL Internet Collective with the draft RSL Media Human Consent Standard (RSL-MEDIA) 1.0, which aims to cover creative works as well as people's names, likenesses, voices, and other identity attributes.

The initial launch allows people to sign up and reserve an identifier that will serve as a key to structured data entered into the RSL Media public registry, scheduled to launch next month. The registry will allow people to verify their identities, set permissions governing the use of their works and likeness, encode those permissions for machine consumption, and verify that AI systems are checking declared permissions.

Whether there will be any legal consequences for AI services that ignore registry settings remains to be seen. The data broker industry in the US hasn't exactly suffered due to the notional existence of "privacy rights." And public concern about non-consensual AI nudification and explicit deepfakes hasn't really put an end to that form of technological abuse or punished the social media sites distributing it.

But this time, Hollywood has shown up.

"AI technologies are expanding rampantly, essentially unchecked and unregulated," said celebrated actress and RSL Media co-founder Cate Blanchett, in a statement. "In order for humans to remain in front of these technologies, consent must be the first consideration. RSL Media is a simple, effective and free solutions-based technology for facilitating and activating consent.
It’s also the industry’s first practical solution where people everywhere, not just public figures, can assert control over how their work is used by AI."

Nikki Hexum, co-founder and CEO of RSL Media, said, "AI can’t respect rights it can’t see, and this means human consent is virtually invisible in this new digital era. The right to decide whether AI can use your work or identity should not be reserved for only those who can afford lawyers or have platforms big enough to be heard, it is a basic human right."

That's not entirely correct. Rights do not need to be seen to be respected; due diligence prior to using material that may be copyrighted is expected. Ignorance of copyright does not excuse infringement, even if it might mitigate potential liability.

AI model makers could have chosen to respect rights by default, by seeking permission to use data for training. They could have chosen to seek permission to crawl websites and could have heeded existing signals to crawlers like the Robots Exclusion Protocol. They could have chosen to abide by the requirements of open source software licenses in harvested code.

They did not do so, because Silicon Valley prefers to ask forgiveness rather than seek permission. Permission is expensive; there wouldn't be much of an AI industry if that were the norm. The law may be one of the things broken by those applying Meta's shelved mantra "move fast and break things." So far, industry disinterest in seeking permission has worked well – AI companies have been held to account in only a few of the hundred-plus lawsuits objecting to AI content capture.

The underlying RSL standard is slowly gaining adoption. The RSL Collective says more than 1,500 media organizations, brands, technology companies, and standards groups now support it following the launch of RSL 1.0 last December, and the relevant RSL XML file can be seen at sites like The Guardian.
While it's unclear what impact the RSL has had on AI biz behavior, extending the RSL to cover personal identity with the RSL-MEDIA standard may stir broader interest in AI rules and their enforcement. Or it may just affirm the XKCD comic about how specifications proliferate. There are already several similar protocols: TDM AI and TDMRep, Spawning's ai.txt, and AI Preferences, not to mention a few that focus solely on images, plus commercial offerings like Cloudflare's Pay per crawl. But RSL Media may have a leg up thanks to the involvement of high-profile celebrities like Blanchett and endorsements from similarly well-known peers.

"Of course artists and cultural creatives will inevitably be involved with AI," said Dame Emma Thompson in a statement. "At the moment, however, AI is merely stealing from us all. This is an urgent and essential initiative. It's also eminently doable, so let’s do it without delay." ®

Editor's note: This story was amended post-publication with clarification about the relationship between RSL Media and the RSL Internet Collective.
Categories: Linux fréttir
US Army goes green-ish, wants soldiers munching on plant proteins
Eating in the field has never been fun for US Army soldiers. And they may soon face even stranger field rations than they do today: Alternative proteins delivered in formats ranging from powders and sauces to gels and semi-solids. The Army on Monday published a sources sought announcement to gather submissions from interested industry and academic partners in the "alternative protein sector," willing to help the branch develop rations that are lighter weight, have a longer shelf life, and could potentially be produced in combat-forward environments. According to the announcement, the Army is looking for submissions covering four areas: Technologies for developing alternative proteins, like fermentation and other biomanufacturing methods, meat alternative products for ration inclusion, consumer research seeking to "enhance the acceptability … of alternative proteins within a military population,” and food samples for government taste and performance evaluations. As an added element, the Army said that it wants ration products that meet its existing “stringent requirements for nutrition, shelf stability, and palatability,” though anyone who has served in the US Army and eaten field rations may have doubts about the military branch's commitment to palatability on its Meal, Ready-to-Eat (MRE). As a US Army veteran, this vulture can attest to an unfortunate level of familiarity with MREs, circa 2002. Beef frankfurters were famously one of the worst, as was the so-called “beef steak” meal that was more like a compressed loaf of meat leavings than an actual steak. The flavor didn’t matter at the end of the day, though, when you’d just marched 15 miles carrying 75 pounds on your back: You just needed sustenance, and even that five pack of frankfurters with a taste I shudder to recall sounded good under the right circumstances. 
The MRE menu lineup, which has changed several times in the past 20 years, includes a few vegetarian options, and it's those that make one of the Army’s requirements for this program so surprising. Civilians might be surprised to learn how popular the non-meat meals were, even among hardcore carnivores. The four or so vegetarian options in the overall MRE lineup were always the first to go when I was in. Not only did they replace military mystery MRE meat with something more appealing to eat out of an envelope, but they were actually tasty - relatively, of course. Vegetarian MREs also tended to be slightly less calorically dense than their animal-derived counterparts, so they included extra bits that made them an even bigger hit. Whether that would translate into soldiers embracing alternative proteins in future MREs isn’t a guarantee, of course. Most weren’t choosing the veggie MREs for alignment with their personal ethics so much as that they wanted a meal that didn’t suck. The Army’s goal of developing “lightweight and nutrient-dense ration solutions to reduce logistical burdens and physical load on warfighter” through the program is definitely a noble one. MREs get heavy quickly if you’re on a long field expedition, but the openness the Army is leaving in the announcement doesn’t make it sound like appetizing solutions could be the first to come out. “Gel/semi-solid formats, dry powder mixes, [and] sauce-style components” are all on the table, with the Army saying the format of “novel ready-to-eat formats … is at the offeror’s discretion.” In other words, future ration components could include gel packs stuffed with fermented mushroom protein and other nutrients, some form of unholy shake, or whatever else food scientists can come up with. 
Interested parties will need to move fast, though: As a sources sought announcement, this isn’t a solicitation and includes no promise that ideas will receive a research grant or procurement dollars; responses, for which the government will provide no assistance, are due by Friday, May 15. The submissions the Army receives could help shape future solicitations in this space, however, meaning the MRE we currently know and … love … may eventually evolve into something rather more futuristic. Hopefully it tastes a bit better. One thing that soldiers will probably be thrilled about? No bugs in whatever field rations come next. "We are specifically excluding solutions related to cell-cultured, lab-grown meat or insect protein," the Army said, though we note that's only for the purposes of this particular announcement, so tomorrow's soldiers might still be subsisting on crickets and ants. ®
FCC walks back router update ban before it bricks America's network security
America's telco regulator has seen some sense over its ban on foreign-made routers, deciding that existing devices should continue receiving software and firmware updates after all. The Federal Communications Commission (FCC) has extended waivers covering certain foreign-made routers (and drones) already operating in the US, pushing the update deadline to at least January 1, 2029. Without the extension, updates would have been blocked as early as 2027. Back in March, the FCC updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models. This effectively banned any new kit made in other countries from being sold, but did not prevent the import, sale, or use of existing models that had previously been authorized. The policy stems from fears that foreign-made routers pose a security threat. Because they handle network traffic, they could introduce vulnerabilities exploitable against critical infrastructure, and in the words of the FCC represent "a severe cybersecurity risk that could harm Americans." Miscreants have exploited security flaws in routers to disrupt networks or steal intellectual property, and routers are implicated in the Volt, Flax, and Salt Typhoon cyberattacks. The policy was widely regarded as flawed, not just because the vast majority of consumer router kit is made outside the US or built from components sourced abroad, but because vulnerabilities and security flaws are not limited to any particular geography, and appear in products from all brands and countries of origin, as noted by the Global Electronics Association (GEA). Blocking firmware updates, which typically deliver security patches for newly discovered flaws, also seemed a peculiar own goal for a regulator whose stated motivation is reducing network vulnerability. 
The FCC has belatedly recognized this, stating that its policies would have "had the effect of prohibiting permissive changes to the UAS, UAS critical components, and routers added to the Covered List in December and March. "This prohibition would be in effect even for Class I and Class II permissive changes - such as software and firmware security updates that mitigate harm to US consumers - because previously authorized UAS, UAS critical components, and routers are now covered equipment." The waivers now run until at least January 1, 2029, falling into the final month of the Trump administration, when there is a chance this may be overlooked in the preparations for Trump’s successor. The FCC extension was met with some approval. Doc McConnell, head of policy and compliance at security biz Finite State, said in a supplied remark: “I strongly support the FCC’s decision to allow firmware and software updates for already-authorized routers, including covered devices already deployed in the United States.” “The biggest practical security risk with routers is not only who made them, but whether they remain patched. When they stop receiving updates, known vulnerabilities remain exposed, attackers gain durable footholds, and consumers are left with equipment they cannot realistically secure on their own. “The original restriction risked creating exactly that problem: millions of deployed routers frozen in time, unable to receive security fixes. I appreciate the FCC recognizing that preventing updates could unintentionally make Americans less safe,” he added. However, as previously reported by The Register, the FCC’s Conditional Approval framework explicitly requires vendors seeking approval for new routers to submit plans to establish or expand manufacturing in America, with quarterly progress updates. As stated by the GEA, “The policy’s logic assumes that manufacturers can and will move production to the United States.” That might be an assumption too far. ®
Congress investigates Canvas breach as company pays ransom
The US Congress has summoned education tech firm Instructure's CEO Steve Daly to the Hill to explain how digital thieves breached its Canvas online platform twice within two weeks. In a letter sent to the digital learning giant late Monday - around the same time Instructure said it had reached an “agreement” with extortion crew ShinyHunters - the US House Homeland Security Committee “requested” that Daly or a “senior representative” schedule a briefing with the committee as part of its investigation into the hacks. “The briefing should address the circumstances of both intrusions, the nature and volume of data accessed, the steps Instructure has taken and is taking to contain the threat and notify affected institutions, and the adequacy of the company’s coordination with federal law enforcement and CISA,” Homeland Security Committee Chairman Andrew Garbarino (R-NY) wrote [PDF]. “With students at more than 8,000 institutions navigating final examinations and end of semester deadlines, the disruption of a platform that Instructure itself describes as serving more than 30 million active users globally is a matter of national concern,” Garbarino said. Also late Monday, the education tech giant said it "reached an agreement with the unauthorized actor involved in this incident." Both Instructure and ShinyHunters, the cyber gang that claimed to have stolen data affecting up to 275 million students, teachers, and staff, claimed that this “agreement” involved deleting all of the stolen files. In other words: the company paid the undisclosed extortion demand prior to the Tuesday deadline, at which time ShinyHunters said they would leak all of the 8,800 colleges, universities, and K-12 schools’ records. "We received digital confirmation of data destruction (shred logs)," Instructure said, adding "We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise." 
The Reg has learned that ShinyHunters abused XSS vulnerabilities in Canvas' Free-for-Teacher learning software, and the bugs allowed the data thieves to obtain administrative access. During the first intrusion, which Instructure detected on April 29, the extortionists claimed to have stolen about 3.6 TB of uncompressed data, including usernames, email addresses, course names, enrollment information, and messages. On May 7, the crooks broke back into Canvas’ systems via the same vulnerability and injected JavaScript containing ransom demands directly into hundreds of Canvas school login portals, causing the ed-tech firm to take the platform offline for a day - during final exams and Advanced Placement testing for many. This is the second known security incident involving ShinyHunters and Instructure in less than a year. The extortion crew also breached Instructure's Salesforce environment in September 2025. Instructure plans to hold a public webinar on Wednesday with the leadership team “to detail information about the cyber attack and our activities to harden the system,” which will be held across “multiple time zones.” ®
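The stored-XSS pattern reported above turns on user-controlled input being echoed back into pages unescaped. As a hedged illustration of the defensive principle only (this is not Instructure's code or its fix), HTML-encoding user input before interpolating it into markup neutralizes script injection:

```python
import html

def render_comment(user_supplied: str) -> str:
    """Escape user-controlled text before interpolating it into HTML.

    html.escape converts <, >, &, and quote characters into entities,
    so a payload such as <script>alert(1)</script> is rendered as inert
    text instead of executing in the viewer's browser.
    """
    return f"<p>{html.escape(user_supplied)}</p>"
```

Escaping at output time is only one layer; frameworks typically add a Content-Security-Policy and templating that escapes by default.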
AirBit crypto Ponzi victims can now claim slice of $400M asset haul
The US Department of Justice has begun accepting applications from victims of the AirBit Club crypto Ponzi scheme for a slice of more than $400 million in forfeited assets tied to the fraud. The compensation fund currently lists about $150 million as available for payout. Launched in 2015, AirBit Club’s schtick was that it ostensibly offered investors guaranteed daily passive income through cryptocurrency mining and trading. It was pitched as a trustworthy multi-level marketing initiative, although prosecutors have since said it mainly preyed on “unsophisticated investors,” running conferences and expos as ways to demonstrate its legitimacy. Members were given access to an investor portal, which would display sums they wanted, and expected, to see – daily profits building as promised. However, these figures were entirely fabricated. Investors’ money was never used for cryptocurrency mining or trading; instead, prosecutors said, it was pocketed by the fraudsters behind AirBit Club and used to fund additional recruitment events across the United States, Latin America, Asia, and Eastern Europe. Of course, when investors tried to withdraw their funds, they were met with delays, fees sometimes exceeding 50 percent, or just plain old account freezes. According to a dedicated website established for the compensation scheme, victims must meet a number of criteria in order to prove their eligibility, including that they used their own money to invest, did so without willful ignorance of the scam’s illegitimacy, and that they had funds still inside AirBit Club at the time of its collapse in August 2020. Those who withdrew their funds before that time, likely incurring the huge withdrawal fees to do so, will not be eligible. “Investor euphoria over new technology is all too often fertile ground for fraudsters,” said US Attorney Jay Clayton for the Southern District of New York. “It is our job to root out those fraudsters." 
“Here, the defendants led a multimillion-dollar pyramid scheme based on lies about virtual currency trading and mining. They now face justice, and this outcome should deter anyone who may be tempted to target others with false promises of high returns in virtual currency investments.”

Five AirBit defendants

Five defendants involved in the AirBit Club scam were sentenced in 2023 after pleading guilty, including co-founders Pablo Renato Rodriguez and Gutemberg Dos Santos, who received prison terms of 12 years and 40 months, respectively, in addition to extensive forfeiture orders. Both Rodriguez and Dos Santos were previously sued by the SEC in 2017 for their roles in a separate pyramid investment scheme, Vizinova, and paid $1.7 million in penalties. Cecilia Millan and Karina Chairez were identified in court documents as senior promoters in the AirBit Club scheme. Millan was sentenced to five years in prison and three years of supervised release, while Chairez received a sentence of one year and one day in prison followed by three months of supervised release. The final member was Scott Hughes, described as the scheme’s attorney. He was sentenced to 18 months in prison and three years of supervised release after pleading guilty to laundering approximately $18 million for AirBit Club, through domestic and foreign bank accounts, as well as an attorney trust account that was reserved for handling his practice’s clients’ funds. He also helped the group erase negative articles about it from the internet. In one case, Hughes engaged a website removal company to remove 15 articles calling AirBit a scam. The group paid $3,000 for each of the 15 takedowns, court documents stated. ®
US bank reports itself after slinging customer data at 'unauthorized AI app'
A US commercial bank just tattled on itself to the Securities and Exchange Commission (SEC) for plugging a bunch of customer data into an unauthorized AI application. Community Bank, which operates in southwestern Pennsylvania, Ohio, and West Virginia, filed an 8-K with the regulator on Monday, saying it launched an investigation into the internal cockup, which remains ongoing. It felt compelled to submit the filing "due to the volume and sensitive nature of the non-public information." This included customer names, dates of birth, and Social Security numbers, but the filing provided no further detail about the incident. Community Bank did not specify what this "unauthorized AI-based software application" was or how it was used. However, the disclosure of data such as SSNs, which in the US are generally categorized among the most sensitive types of data that organizations can store on behalf of customers, is protected under several federal and state laws. One possibility is that the data was entered into a generative AI tool outside the bank's approved systems. If so, that could raise questions about whether the information was transmitted to a third-party provider and how it may have been retained or processed. The Register asked Community Bank for more details and will update this story if it responds. The bank confirmed that it suffered no operational impact and customers were not prevented from accessing their accounts or payment services as a result. "The company is evaluating the customer data that was affected and is conducting notifications as required by applicable federal and state laws and regulatory guidance," Community Bank stated in its cybersecurity disclosure. "The company has been, and continues to be, in communication with relevant banking and financial regulators regarding the incident." 
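If the generative-AI hypothesis above is right, the standard mitigation is a data-loss-prevention check at the boundary before anything leaves approved systems. A minimal sketch of the idea (illustrative only; this is not the bank's tooling, and production DLP patterns are far broader):

```python
import re

# Matches formatted US SSNs such as 123-45-6789. Real DLP tooling also
# catches unformatted nine-digit runs and validates area/group numbers.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_outbound(text: str) -> str:
    """Mask SSN-shaped strings before text leaves an approved system,
    for example before being submitted to an external AI application."""
    return SSN_RE.sub("[REDACTED-SSN]", text)
```

A boundary check like this is a backstop, not a substitute for blocking unapproved applications in the first place.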
It also promised to continue its remediation efforts, take action to prevent future failures, and gave the "we're committed to protecting customers' data" line that always goes down so well. ®
SpaceX Starship completes Wet Dress Rehearsal, gets ready for launch
SpaceX is set to launch the third version of its Starship rocket after completing a Wet Dress Rehearsal (WDR) - a full fueling test - yesterday. It was second time lucky for Elon Musk's rocketeers, after a first attempt over the weekend was aborted. The issue cropped up before propellant was loaded. However, on Monday, the company tried again and confirmed that during the countdown (designed to check out as many activities as possible short of launching the behemoth) 5,000 metric tons (more than 11 million pounds) of propellant were loaded into the vehicles stacked on the company's new Pad 2 at its Starbase facility in Texas. NASA's Artemis II also suffered from WDR problems, though in that case the US space agency was forced to roll the rocket stack back to the Vehicle Assembly Building for repairs. Whatever issue bedeviled SpaceX's latest Starship and its Super Heavy Booster was dealt with at the pad, and the test was successfully repeated. A launch of the latest rocket revision could therefore occur in the coming days or weeks, pending the results of the WDR and approval from the Federal Aviation Administration (FAA). Although SpaceX has yet to confirm a target date, it is likely sometime toward the end of May. SpaceX had already performed a full-duration and full-thrust static fire of the 33 engines of the Super Heavy Booster earlier in May, and showed off imagery of the complete Starship V3 stack on May 9. Time is running out for the company. NASA has stated that it aims to launch the Artemis III mission at the end of 2027, intended to test hardware for a planned lunar landing the following year. SpaceX is contracted to produce a lunar lander for the US space agency, and getting the third version of Starship into space is an essential part of those plans. This next mission, Flight 12, will not be troubling orbit as SpaceX tests the changes made to the new version of the launcher. 
Future launches must, however, reach orbit if the company is to stand a chance of meeting NASA's requirement for a rendezvous demonstration and check-out as part of Artemis III. ®
Lawsuit brought by former store operators missing from Vodafone results
Vodafone has not listed a potential liability in its 2026 financial results stemming from a legal claim by franchise operators who allege they were harmed by company-imposed business decisions. The Fairer Franchise campaign group represents 62 current and former Vodafone franchisees, who are bringing an £85 million ($115 million) High Court claim, alleging the telco unilaterally cut commissions and overhauled the way it compensated them for operating Vodafone-branded stores, often without consultation. The claimants, some of whom are former employees of Vodafone, say they were encouraged to invest heavily in Vodafone stores after the firm established a franchise program in mid-2017. This expanded to around 400 branches, 183 of which were operated by the claimants in the case. Vodafone is alleged to have repeatedly and unilaterally cut the commissions paid to franchisees for sales of its products and services, particularly from July 2020 onward. The group also claims Vodafone changed remuneration models without consultation or proper consideration of the impact this would have on the franchised businesses. In particular, the claimants allege Vodafone unlawfully clipped remuneration from August 1, 2020, by reducing the commission rates on customer and home broadband upgrade transactions, and that it restructured the calculation of commission to franchisees in a manner beneficial to itself, as part of the rollout of a scheme called "EVO" in June 2021. At the heart of the case is the group's claim that the franchisees were effectively "commercial agents" of Vodafone – within the meaning of the Commercial Agents Regulations – because they sold products and contracts on Vodafone's behalf. Vodafone denies this and says the regulations do not apply. If the High Court rules that they are applicable, the franchise operators may be entitled to termination indemnities that the claimants estimate could be worth up to £52 million ($70 million) alone. 
The group says Vodafone has already conceded aspects of the claim in court, including admitting breach of contract in relation to rent-free periods for some stores that were not passed on to the franchisees. A spokesperson for the Fairer Franchise group told The Register: "Vodafone has again failed to disclose our £85 million High Court claim as a contingent liability in today's results, while quietly paying more than £20 million to other franchisees with no explanation, and after admitting it breached our contracts over rent-free periods never passed on to us." "We are 62 people who lost our businesses, our savings, and in many cases our health. As VodafoneThree prepares to reshape two retail estates, the question for investors and analysts is whether this management team should press ahead while serious allegations about its treatment of franchisees remain unresolved." The next hearing is scheduled for July 9. The Register asked Vodafone for a statement regarding the group's claims and why it did not mention the case as a potential liability in its financial results. In its results report published Tuesday, Vodafone says: "Legal proceedings where the Group considers that the likelihood of material future outflows of cash or other resources is more than remote are disclosed below. Where the Group assesses that it is probable that the outcome of legal proceedings will result in a financial outflow, and a reliable estimate can be made of the amount of that obligation, a provision is recognized for these amounts." For the UK, Vodafone lists two lawsuits. One involves alleged overcharging of customers who signed contracts that included both a handset and airtime. The other covers alleged collusion between the major UK mobile networks to withdraw their business from Phones 4U, causing its collapse. There is no mention of the Fairer Franchise case. 
Vodafone Group's fiscal 2026 results showed an 8 percent year-on-year increase in revenue to €40.5 billion ($47.6 billion), attributed to strong services growth and the consolidation of Three UK. Service revenue grew 8.8 percent to €33.5 billion ($39.3 billion), although for the UK the rise was just 0.3 percent. ®
NHS England confirms: Palantir staff can access patient data
The National Health Service in England has confirmed it is allowing staff from Palantir access to patient data following a change in policy. The US spy-tech firm provides the technology for the Federated Data Platform (FDP), under a £330 million ($446 million) contract it won in 2023. The system is designed to improve data sharing across the NHS in England and help the state healthcare provider recover from the pandemic backlog. Under previously agreed rules, Palantir staff working on the FDP could only access the National Data Integration Tenant (NDIT), a data repository for patient data before it is transferred to the "pseudonymized" analytics system, if they applied for access to specific data sets. A document released by NHS England says that Palantir staff can get a new "admin" role and access the NDIT and its identifiable patient data. Other consultants working on the FDP will get similar access. The briefing document, seen by the FT and confirmed by The Register, said granting access to the data to Palantir staff and others could create a "risk of loss of public confidence" in its assurances about "safeguarding patient data and ensuring appropriate use and access to it." The Register understands the change is designed to apply to a small number of people working on the new central data collection platform, used to monitor NHS performance using the NDIT. An NHS England spokesperson said: "The NHS has strict policies in place for managing access to patient data and carries out regular audits to ensure compliance - including monitoring the work of engineers helping to set up the central data collection platform that will track NHS performance and help improve care for patients. 
“Anyone external requiring access must have government security clearance and be approved by a member of NHS England staff at director level or above.” Sam Smith, coordinator at health privacy campaign group medConfidential, said Palantir and other consultants have already been able to access patient data - albeit pseudonymized in some cases - in other tenants of the FDP. But NHS England became unstuck because of its lack of clarity, he said, adding: "It's the equivalent of telling a civil servant, 'Only you can read your email' and then going, 'Oh, but freedom of information exists'. It is just a lack of transparency that we got through a leak, rather than saying 'We're going to do this thing, here's what it will mean'." NHS England is the quango which runs the NHS in England under the Department for Health and Social Care. The incumbent Labour government is disbanding NHS England and plans to run the service directly from the Whitehall department. In March, the Health Service Journal reported that nearly a third of NHS trusts connected to the FDP in 2025 were not meeting data security standards. An NHSE spokesperson told the publication: "The Federated Data Platform has data protection and cyber security at its core, which is why the NHS has worked with local organizations to ensure they meet the required standards and have introduced strengthened measures where appropriate." The minister responsible for the FDP, Zubir Ahmed, told MPs last month that NHS England and NHS organizations would "retain full control as data controllers, including over decisions about how data is used, who can access it and which products are deployed." He said: "Palantir does not own the data, the products or the intellectual property, nor can it use the NHS data for its own purposes." 
He said: "Palantir operates strictly within a UK-regulated contract where the NHS controls all data, access is tightly governed, and information can be used only for agreed purposes that benefit patients." When it launched the FDP contract, the NHS said patient data would be protected through "clear regulations, security measures, retained within the UK region, with access fully audited and NHS cyber security monitoring and protection." Palantir was awarded the FDP contract after winning a succession of pandemic-era deals, worth a combined £60 million, without competition. ®
Frontier AI safety tests may be creating the very risks they're meant to stop
Frontier AI safety testing is becoming a security nightmare of its own, with a new RUSI report warning that the process of granting outsiders access to inspect powerful AI models is itself creating new security risks. The paper, published Tuesday by London-based think tank Royal United Services Institute (RUSI), warns that the rapidly expanding system of third-party AI evaluations is riddled with inconsistent standards, vague terminology, weak access controls, and security assumptions that would make most enterprise infosec teams break out in hives. The report focuses on a growing problem facing governments and AI companies alike: meaningful safety testing requires outsiders to access advanced models, but every new access pathway creates another opportunity for theft, tampering, espionage, or abuse. That gets especially risky when the systems in question are being evaluated for capabilities related to cyberattacks or chemical and biological weapon development. "The security risks associated with this access, from intellectual property leakage to model compromise to exploitation by state-sponsored actors, remain poorly mapped and inadequately standardized," the authors wrote. RUSI argues that the industry has drifted into a situation in which labs, evaluators, governments, and researchers are all operating under different definitions of what "secure access" actually means. One evaluator might get limited API access, while another receives deeper visibility into model internals, infrastructure, or training environments. The paper introduces what it calls an "Access-Risk Matrix" designed to map different types of model access against different threat scenarios. Unsurprisingly, handing outsiders write access to frontier models lands firmly in the "what could possibly go wrong?" category. 
"Write access to model internals represents the access type with the highest level of risk," the report warns, because it potentially allows adversaries to tamper with model behavior directly. The report also punctures the industry's tendency to frame frontier AI security as some entirely new class of problem requiring magical new solutions. Some of the biggest risks identified by the authors are depressingly familiar: stolen credentials, poor credential hygiene, weak access revocation, and overprivileged users. In other words, the same identity and access management problems corporate security teams have wrestled with for decades, except now attached to systems being tested for catastrophic misuse risks. RUSI also warns that the lack of internationally standardized rules governing AI evaluations is creating openings for hostile states, criminal groups, and rogue insiders to exploit gaps between jurisdictions and organizations. "Access decisions remain ad hoc, security expectations are inconsistent and the language used to describe access levels varies across jurisdictions, organizations and agreements," the paper states. The report ultimately calls for formalized international governance frameworks and closer coordination between cybersecurity professionals and AI safety researchers before the current patchwork system turns into the world's most expensive lesson in privileged access management. ®
Cache-poisoning caper turns TanStack npm packages toxic
An attacker has published 84 malicious versions of official TanStack npm packages, with the impact including credential theft, self-propagation, and complete disk wipe of an infected host. The attack is part of a wave of attacks across npm and PyPI, continuing the Mini Shai-Hulud campaign. Supply chain security company Socket reports that other compromised packages include the OpenSearch client, Mistral AI, UiPath, and Guardrails AI. Malicious npm packages for TanStack, an open source application stack, were published between 19:20 and 19:26 UTC on May 11. The attack was detected and reported within 30 minutes by StepSecurity, triggering incident response and npm deprecation. GitHub published a security advisory at 21:30 UTC, including a list of affected packages. TanStack founder Tanner Linsley published a postmortem describing how the attacker used a malicious commit on a fork to create a pull request on the TanStack repository, causing scripts to auto-run and build the malware. This poisoned the GitHub Actions cache in what Linsley said is a variant of a known GitHub Action vulnerability discovered in 2024. The malware then extracted the npm OpenID Connect (OIDC) token, used for trusted npm publishing, from runner memory using the same code used to compromise tj-actions in an attack last year. No TanStack maintainers were compromised. StepSecurity has a detailed analysis of the attack, noting that the payload "reads files from over 100 hardcoded paths" including those that may contain cloud credentials, SSH (secure shell) keys, developer tool configuration files, crypto wallets, VPN configurations, messaging credentials, and shell history. Shell history may contain tokens and passwords pasted into the terminal. Security researcher Nicholas Carlini warned the payload "installs a dead-man's switch… as a system user service." The service checks whether a stolen GitHub token has been revoked and, if it has, runs a command to wipe the local disk completely. 
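For responders, one practical first step is checking project lockfiles against the advisory's affected-package list. A minimal sketch for npm v2/v3 package-lock files follows; the package name and version in it are placeholders, not the real affected list, which should be taken from GitHub's advisory:

```python
# Placeholder entries -- populate from the GitHub security advisory.
AFFECTED = {
    ("@tanstack/example-pkg", "1.2.3"),
}

def audit_lockfile(lock: dict) -> list[str]:
    """Return affected name@version strings found in an npm v2/v3
    package-lock dict, whose keys look like 'node_modules/@scope/name'."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # rpartition keeps only the segment after the last 'node_modules/',
        # which handles nested dependency paths.
        name = path.rpartition("node_modules/")[2]
        if (name, meta.get("version")) in AFFECTED:
            hits.append(f"{name}@{meta['version']}")
    return sorted(hits)
```

Installing with npm's `--ignore-scripts` flag, which skips lifecycle scripts, is a complementary mitigation against payloads that run on install.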
Socket's write-up includes recommended actions such as rotating all secrets on any affected system. GitHub's advisory suggests "any developer or CI environment that ran npm install, pnpm install, or yarn install against an affected version on 2026-05-11 should be considered compromised." The Mistral AI compromise has also been reported on GitHub, and at the time of writing, the Mistral AI project is quarantined on PyPI. This attack is still evolving and will likely have a far-reaching impact. It confirms again that running everyday commands like npm install is unsafe, that for all their efforts major package repositories including npm and PyPI are still not secure, and that software development is now best done in isolated, ephemeral environments. ®
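Socket's advice to rotate all secrets is easier to act on if you know what was exposed. Since the payload reads shell history, a quick triage is to flag history lines that look like credentials. A minimal sketch — the token formats below are a few well-known illustrative patterns, not the malware's actual hardcoded list:

```python
import re

# Illustrative credential formats (assumptions, not the payload's patterns).
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),      # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key ID
    re.compile(r"xox[baprs]-[A-Za-z0-9-]+"), # Slack token
]

def secret_like_lines(history_lines):
    """Return shell-history lines matching a known credential format."""
    return [line for line in history_lines
            if any(p.search(line) for p in SECRET_PATTERNS)]

sample_history = [
    "ls -la",
    "export GITHUB_TOKEN=ghp_" + "a" * 36,
    "aws configure set aws_access_key_id AKIAABCDEFGHIJKLMNOP",
]
print(secret_like_lines(sample_history))
```

Anything flagged this way should be rotated first; absence of a match proves nothing, since pattern lists are never exhaustive.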
Categories: Linux fréttir
EU browser choice rules send millions more users Firefox's way
The EU's Digital Markets Act (DMA) has been kind to Mozilla, which says Firefox use is on the up as Europeans are given a choice of default browser on mobile. Through these browser selection screens, the company reckons 6 million users have opted for Firefox instead of what would otherwise have been Safari or Chrome, depending on whether they used an iPhone or Android device. Moz has seen the greatest success on iGadgets, with a 113 percent increase compared to a mere 12 percent rise on Android. This is less likely to be explained by overwhelming disdain for Safari than by the ways in which Apple and Google implemented these browser choice screens. Android devices display the browser selection screens upon first boot or after factory reset, whereas iPhone and iPad users are now shown the same screen as soon as they open Safari for the first time. The DMA obligations began applying in March 2024. Apple's implementation of the EU requirements was always going to lead to more people being prompted to select their browser than Google's, which mostly applies to new Android owners after the DMA was enforced, rather than existing users. Mozilla won't care, though, because not only are user numbers up, but user retention is also looking good – it is five times higher than before the DMA, by its reckoning. Other browser vendors have reported similar results, according to a recent European Commission review [PDF] of the DMA's efficacy, although it didn't cite any specific figures. Few vendors have published long-term results like Mozilla's, although Aloha, Brave, Opera, and Vivaldi all reported sizable uplifts in users in the initial days and weeks following the DMA's enforcement. Further, in recent publications [PDF], DuckDuckGo said around 40 percent more users selected its browser on Android thanks to the DMA browser choice screen. 
The privacy-focused tech biz offered the statistic in its submission to the UK government's consultation on how to maintain competition in online search. Moz also submitted its thoughts on the topic, and unsurprisingly, given how much both vendors benefited from the EU's rules, both want the same DMA-style browser choice screens to feature in the UK market. DuckDuckGo said the screens should be shown to users annually, and Google should be forced to remove its "Switch back to Google" prompt in Chrome. Mozilla wants the browser choice screens to be delivered to UK users in 2026, for the same users also to be presented with similar screens for default search engines, and for these measures to be enforceable rather than relying only on voluntary commitments from the relevant vendors. Going beyond the DMA, Moz added that it would also like to see the same measures applied to desktop browsers, alleging that Microsoft deploys deceptive design tactics to push its Edge browser. ®
Microsoft makes Copilot easier to summon, harder to ignore in Office
Microsoft is "streamlining" access to Copilot within its productivity applications and updating the keyboard shortcut to activate the assistant. "We heard from many of you that you're unsure how to start engaging with Copilot," the company says, though it did not elaborate on where it had heard this. On its Microsoft 365 Copilot feedback forum, the top-voted request was for more granular agent availability controls. Awkwardly, the fifth-most-voted request at the time of writing is "Disable the M365 Copilot Floating Button in Office Apps," which calls the feature "highly disruptive." One commenter stated: "Not allowing users to remove this floating bubble is beyond obnoxious." Fortunately for such refuseniks, Microsoft is going to make accessing Copilot more straightforward. First, the company is reducing the number of entry points to its assistant. There will be the Copilot icon in the bottom-right corner of the screen (hover over it to get suggestions), and a contextual entry point when users interact with content (Microsoft gives the example of selecting text). Microsoft has also updated the keyboard shortcuts for its assistant. Hitting F6 now shifts the focus to the Copilot button in the canvas, and the Up Arrow key lets users move between prompts. In addition to setting focus on the Copilot button, Alt+C will move focus to the Copilot Chat pane if it is already open. "Before you know it, Copilot will be editing your content directly from conversation," enthused Microsoft. The first user to comment on Microsoft's announcement wrote: "How to not show the icon at all? Even the docked one is really annoying." Shush you. This is all about helping the "many" users Microsoft has heard from who want to engage with Copilot. The new Copilot button and updated shortcuts are due to reach general availability in Word, Excel, and PowerPoint for Windows and Mac by early June. Mac users will need to hit Cmd + Control + I to set focus on the Copilot button. ®
Windows update prompt joins the Post Office queue
BORK!BORK!BORK! "Let's cross this one off your list" are words to strike fear into the hearts of many a Windows user, particularly when they appear on some Post Office digital signage. Spotted by an eagle-eyed Register reader in East Dulwich, London, the screen is one of two public displays designed to entertain and inform customers waiting to be ignored by a member of staff. The Post Office is a place where objects can be sent and forms completed or collected. It is normally identifiable by a queue of depressed citizens snaking toward (and sometimes beyond) the door, and an impressive ability to have not quite enough staff to ensure all available positions are open. Here, Windows is thankfully relegated to serving up information rather than the all-important task of announcing available counters. The English may be patient queuers, but even they would baulk at a mechanical voice declaring "IRQL_NOT_LESS_OR_EQUAL", followed by the news that Windows needed to dump its memory before service could resume. That said, using Microsoft's finest to run an information screen does seem overkill. "I've always been amazed that a full-fat OS is used on a system that only has to perform a trivial function," our reader noted, and we'd have to agree, particularly when Windows, in this instance, doesn't even seem able to do that right. The message, in theory, is helpful. Windows needs an update and is politely asking when a good time would be. The problem is that, without a keyboard and mouse available, nobody in the queue can help. And, frankly, Windows shouldn't need to ask. Considering the opening times of the average Post Office, there is plenty of time when the doors are locked, and there are no punters on hand to witness the operating system giving itself a jolly good update, with a cheeky reboot or two to finish the job. ®
