Linux fréttir

Bitwarden Scrubs 'Always Free' and 'Inclusion' Values From Its Website

Slashdot - Fri, 2026-05-15 21:00
Bitwarden appears to be undergoing a quiet shift in leadership and messaging. Its longtime CEO and CFO have stepped down, while the company has removed "Always free" from a prominent password-manager page and replaced "Inclusion" and "Transparency" in its GRIT values with "Innovation" and "Trust." Fast Company reports: In February, longtime CEO Michael Crandell moved to an advisory role, according to LinkedIn, with no announcement from the company. His replacement, Michael Sullivan, former CEO of both Acquia and Insightsoftware, touts his experience with "all facets of mergers and acquisitions" on his own LinkedIn page, including experience working with leading private equity firms. CFO Stephen Morrison also left Bitwarden in April, replaced by former InVision CEO Michael Shenkman. Both Crandell and Morrison joined the company in 2019. Kyle Spearrin, who started Bitwarden as a fun hobby project in 2015, remains the company's CTO. Meanwhile, Bitwarden has made some subtle tweaks to its website. The page for its personal password manager no longer includes the phrase "Always free." Previously this appeared under the "Pick a plan" section partway down the page, but that section no longer mentions the free plan, though it remains available elsewhere on the page. Bitwarden made this change in mid-April, according to the Internet Archive. Bitwarden has also stopped listing "Inclusion" and "Transparency" as tentpole values on its careers page. The company has long defined its values with the acronym "GRIT," which used to stand for "Gratitude, Responsibility, Inclusion, and Transparency." After May 4, it changed the acronym to stand for "Gratitude, Responsibility, Innovation, and Trust." The phrase "inclusive environment" still appears under a description of Gratitude, while "transparency" is mentioned under the Trust heading. They're just no longer the focus.

Read more of this story at Slashdot.

Categories: Linux fréttir

Git is unprepared for the AI coding tsunami

TheRegister - Fri, 2026-05-15 20:15
Last month, Mitchell Hashimoto, HashiCorp co-founder, publicly declared that he was moving his popular open source Ghostty terminal emulator project from GitHub. GitHub runs the world’s largest service built on the Git distributed version control system, created by Linus Torvalds. Once an enthusiastic user, Hashimoto grew disillusioned with service disruptions, and increasingly slow pull requests. “This is no longer a place for serious work if it just blocks you out for hours per day, every day,” he wrote. Hashimoto was quick to defend Git itself: “The issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc.” Many have blamed GitHub’s performance on Microsoft, which acquired the company in 2018. But to be fair, GitHub itself has been experiencing heavier-than-expected traffic thanks to a proliferation of AI-generated pull requests. In 2025, GitHub saw a 206 percent year-over-year growth in AI-generated projects measured by the use of Bash shell scripts, a widespread way of running agents. And more AI code means more bugs. Research from GitClear found that AI-generated code heaped 10.83 issues per pull request, compared to 6.45 for the old-fashioned human variety. Our new agentic workforce is raising big questions about how the entire software development lifecycle (SDLC) should evolve, and if Git should come along. “Agents are nudging us toward a continuous flow,” warned Peco Karayanev, co-founder of DevOps platform provider Autoptic, which bridges Git-based deployments with observability tools for agent-based remediation. Autoptic’s entire user base runs on some form of Git, either homebrew or from a service provider like GitLab. Given the volume and magnitude of changes across repos, “we need git to start operating in a more continuous mode,” Karayanev wrote in an email interview. Git operations, especially when used in GitOps-style automated deployments, still need to be managed by people. 
Updates, commits, pushes, merges are often yoked into sequences of “stop/go” episodes where someone has to hit enter on the keyboard a few times to continue the workflow, Karayanev noted. This model may not hold up once agents start getting priority.

A butler for Git

Git has always had its share of critics, especially those who use the tool daily. There may not be another piece of software that is so widely adopted and yet so inscrutable. Torvalds and other Linux kernel developers built Git in 2005 after frustrations with trying to shoehorn Linux code into the commercial BitKeeper tool. Linux, a global group project of mammoth proportions, required a distributed version control system able to support non-linear development of thousands of parallel branches. Like any distributed system, Git can be difficult to understand. Scott Chacon, one of the co-founders of GitHub, co-wrote a book on using Git (2009’s Pro Git), and still he finds himself occasionally flummoxed by the version control system. There are still “sharp edges” to Git, Chacon told The Register. “There's a lot of stuff that it doesn't do very well from a usability standpoint,” he said. Chacon co-founded GitButler as a way to “rethink the porcelain” of Git, to make Git more suitable for modern workflows. (Last month, GitButler received $17 million in venture capital funding.) Think of GitButler as a super-powered Git client. It allows the developer to work on two different branches simultaneously, using a technique called virtual branching. It reconciles the code a developer is working on with the upstream code. They can reorder commits, or edit the message of a previous commit. It offers richer metadata about the files being worked on. It can show which commits are unique to that branch.
Best of all, it eliminates what many developers call “rebase hell,” where merges into an updated codebase must be checked one at a time, a problem GitButler solves by keeping the user’s code synchronized with what is upstream. Many of the actions GitButler offers can be done through the Git command itself – although Git’s command language, and its rules, can be so obtuse that “you will probably make a mistake at some point,” Chacon said.

A Git for agents

Chacon believes GitHub’s current reliability issues stem from the tsunami of agentic work. This is “ironic” because GitHub was built to scale Git, he said. “But an influx of agents is pushing the service to the brink.” The problem lies not with Git itself, but with everyone using one service, Chacon argued. Last year, GitHub had about 180 million users working across 630 million repositories – with 121 million created in 2025 alone, according to the company’s most recent annual Octoverse report. “From the longer-term perspective, it doesn't need to be like this,” he argued. Maybe Git should be run locally, mirrored globally and managed with clients … such as GitButler, Chacon suggested. Perhaps Git-based version control systems could be customized for specific industry verticals. We need to think about how we “distribute these systems more,” he said. “Git is designed to be distributed but we’re not distributing it.” GitButler has created a command line interface specifically for agents. It was designed to give MCP servers an integrated map of the repository, which otherwise would require stitching together multiple Git commands. The Virtual Files concept allows the agent to work on a section of code that is also being worked on by a developer, or another agent. These are changes that point to a rethinking of how a Git workflow should run. “I think all of these systems should fundamentally change, because all of our workflows have changed, right?
There needs to be different, sort of primitives for how to deal with these problem sets,” Chacon said.

A tip from gaming development

One company that wants its platform to replace Git altogether is Diversion, which has built an eponymous distributed version control system initially pitched for large-scale game design. “Git's architecture is actually an issue that prevents scaling,” argues Diversion CEO Sasha Medvedovsky in an interview with The Register. “Fundamentally it's an architecture problem that can't be fixed and is a bottleneck for end users and hosting services.” Git is a distributed system insofar as every user, or hosted service, requires a dedicated database (much like blockchain). “It's not distributed in the regular sense but rather replicated,” he wrote in an exchange with The Register on LinkedIn. Operations run on a single thread, making concurrency impossible. As a result, the larger the repository, the slower the commit operations – a deadly combination for fast-paced agentic software development, Medvedovsky noted. Of course, every CEO will have their talking points ready about a competitor’s weaknesses (Diversion is finalizing a blog post with hard numbers about Git and GitHub performance). But there is a growing number of other initiatives aimed at prepping Git for the challenging times ahead. Perhaps most notable is Jujutsu, a Git-compatible distributed version control system, stewarded by Google senior software engineer Martin von Zweigbergk. Like GitButler, Jujutsu (jj) aims to eliminate a lot of the annoyances that come with Git. It includes an undo button and the ability to keep committing even when there is a conflict. And because everything written in C must be recast into Rust these days, long-time Git contributor Sebastian Thiel started a project called Gitoxide to rebuild Git in Rust.
Potential benefits include significant performance improvements through multicore processing, and the much-needed memory safety that comes with Rust.

Will Git 3 solve all the problems?

Git’s chief maintainer is Junio Hamano, who took the reins from Torvalds in 2005, and he remains busy keeping Git current. At FOSDEM this February, core Git contributor and GitLab engineering manager Patrick Steinhardt discussed some of the changes coming in the next version of Git, version 3, which is gradually being rolled out this year. One of the chief improvements will be in the way Git manages references, the pointers that identify each change being made. Surprisingly, this operation is a real bottleneck for the software. “The design is inefficient,” Steinhardt told the audience. When a programmer commits a code change, the updated reference gets recorded in a “packed-refs” file, which saves time by not giving each reference its own file. As projects grow larger, however, it takes longer for Git to amend or to delete a reference in packed-refs (one GitLab repo has a packed-refs file of more than 20 million references, Steinhardt said). This is especially problematic when you have multiple, simultaneous readers and writers of that file. And just forget about getting a consistent view of all the references. The freshly implemented Reftable feature, which will be the default in Git 3.0, stores references in an indexable binary format. The Git folks borrowed this concept from the Eclipse Foundation’s JGit Java implementation of Git. Reftable allows for block updates, eliminating the need to rewrite a 2 GB-sized file for a single entry. And it is much faster for reading, which would pave the way for Git supporting larger, more sprawling repositories – perfect for an ever-busy agentic workforce. For more than two decades, Git has proved to be the version control system of choice for geeks worldwide.
But even with these new features and various third-party enhancements, can it retain relevance for a new generation of agentically enhanced coders? The battle is on. ®
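The packed-refs bottleneck Steinhardt describes is easy to see with stock Git commands. This sketch uses a throwaway repository; the reftable backend itself requires a recent Git build and is not shown here.

```shell
# Throwaway repo: references start life as individual "loose" files
cd "$(mktemp -d)" && git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "first"
git branch topic                # a second reference, one more loose file

# Consolidate every reference into the single packed-refs file --
# the file that grows past 20 million entries in the GitLab example
git pack-refs --all
cat .git/packed-refs            # one line per reference: <sha> <refname>
```

Because every packed reference lives in that one text file, rewriting it on each update is what reftable's indexed, block-updatable format is designed to avoid.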
Categories: Linux fréttir

The Era of 15GB Free Gmail Storage Is Ending

Slashdot - Fri, 2026-05-15 20:00
Google has confirmed it is testing a 5GB storage limit for some new Gmail accounts, with users able to unlock the standard 15GB by adding a phone number. Android Authority reports: While the company didn't mention which regions are impacted, user reports from yesterday were mostly from African countries. That said, if Google's tests prove successful, this could possibly become the norm for new sign-ups in more regions. The company could be testing ways to discourage users from creating multiple Gmail accounts to access free cloud storage. However, if you already have a Gmail account with 15GB free storage, it shouldn't be impacted by this change. The language on Google's support page mentions "up to 15GB of storage." However, it's a recent change. An archived version of the support page from February did not use the words "up to." Whether the test has been running since early March or Google updated its language before it ever started the test, it's evident that the company could roll out the change globally as well.

Read more of this story at Slashdot.

Categories: Linux fréttir

AI agents show they can create exploits, not just find vulns

TheRegister - Fri, 2026-05-15 19:45
Sure, AI agents such as Mythos can find security vulnerabilities in software, but the bigger question is whether they can turn those flaws into functional exploits that work in the real world. After all, many AI-discovered bugs prove minor or difficult to weaponize. New research, however, suggests frontier models can indeed develop working exploits when directed to do so. To better understand the rapidly changing security landscape, computer scientists from UC Berkeley, Max Planck Institute for Security and Privacy, UC Santa Barbara, Arizona State University, Anthropic, OpenAI, and Google decided to build ExploitGym, a benchmark for evaluating the exploitation capabilities of AI agents. This is not an entirely disinterested set of investigators – Anthropic, OpenAI, and Google all sell AI services. And both Anthropic and OpenAI have talked up the risk of leading models Claude Mythos Preview and GPT-5.5 while selling access to government partners. Since Anthropic announced Mythos in early April, the security community has been critical of the company's approach, described by some as fear-mongering. And various security experts have made the case that even commercially available AI models can find security flaws. Nonetheless, Mythos and GPT-5.5 outshine their peers in ExploitGym, as described in the paper, "ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?" ExploitGym consists of 898 real vulnerabilities found in applications, Google's V8 JavaScript engine, and the Linux kernel. Its workout consists of presenting an AI agent with a vulnerability and proof-of-concept input that triggers it, to see whether the agent can create an exploit capable of arbitrary code execution. According to the UC Berkeley Center for Responsible Decentralized Intelligence, Mythos Preview successfully exploited 157 test instances and GPT-5.5 managed 120 in the allotted two-hour window. 
"Even when standard security defenses like ASLR or the V8 sandbox were turned on, a meaningful number of exploits still worked," the boffins wrote in a blog post. "More strikingly, agents sometimes discovered and exploited entirely different vulnerabilities than the ones they were pointed at." The agents (CLI + model) tested were Claude Code with Claude Opus 4.6, Claude Opus 4.7, Claude Mythos Preview, and GLM-5.1; Codex CLI with GPT-5.4/GPT-5.5; and Gemini CLI with Gemini 3.1 Pro. And even the ancient models released in February (Opus 4.6 and Gemini 3.1 Pro) had some success. The researchers say that one of their more interesting findings is that these models sometimes went "off-script" in capture-the-flag (CTF) environments, where an agent has to find and retrieve some hidden value. This was most evident with Mythos Preview and GPT-5.5. The former succeeded in 226 CTF exercises but only used the intended bug in 157 instances, while the latter captured 210 flags and only used the intended bug in 120 of those cases. The authors also note that while there was some overlap in the exploits discovered, the various models found different exploits. This suggests applying a diverse set of models might be advantageous both in attack and defense scenarios. It's worth adding that ExploitGym tests were done with security guardrails disabled. When the test was re-run on GPT-5.5 with default safety filters active, the model refused 88.2 percent of the time before making any tool call. The Register, however, has seen security researchers craft prompts in a way to avoid triggering refusals. So safeguards of that sort have limits. "Our results show that autonomous exploit development by frontier AI agents is no longer a hypothetical capability," the authors state in their paper. "While current agents are not yet reliable across all targets, they already exploit a non-trivial fraction of real-world vulnerabilities, including complex targets such as kernel components." ®
Categories: Linux fréttir

LocalSend puts your sneakernet out of business

TheRegister - Fri, 2026-05-15 19:10
FOSS It happens all the time. You have a file on one of your devices and you need to have it on another one. You could put the file on a USB flash drive and walk it over (the so-called sneakernet), you could email it to yourself, or you could try to set up some kind of network resource. LocalSend, a free open source tool, makes the process of sharing files on a LAN easier than anything else and it works on Windows, Linux, macOS, Android, and more. The Reg FOSS desk is not routinely a fan of Apple fondleslabs. (We’ve tried, but they’re a bit too locked down for us.) Saying that, from what we’ve heard, LocalSend is a bit like Apple’s AirDrop but for grown-up computers and non-Apple kit. For Linux Mint users, it’s a bit like the included Warpinator – and as that page says, don’t search for it and go to warpinator.com, as it’s a fake site. It’s a free download from its GitHub page and is also available in Canonical’s Snap store and on Flathub. You run it, and it gives that computer a cute nickname in the form of (adjective)+(fruit). Run it on two computers on the same local network, and they should see each other. You click “send” on one, and “receive” on the other, and that’s about it: pick the file or folder, and off it goes. LocalSend isn’t very big – the installation packages are mostly around the 15 MB mark – so it’s pretty fast to download or install. This vulture found and tried it when we downloaded a just-over-4 GB file and then worked out we’d downloaded it onto the wrong OS on the wrong machine. It takes a good few minutes to download several gigabytes – we live on a small, remote island, where our 100 Mbps broadband costs about four times what 1 Gbps broadband used to cost in Czechia – and it seemed worth trying to transfer it rather than grab another copy. The gist of the idea is that LocalSend is quicker than using a USB key. 
You know the sort of process: find a big enough USB key, check it has space, copy the file onto it, eject it, go to the other machine, insert it, and copy the file off again. Even if it goes perfectly, LocalSend is still less hassle. It’s also easier than configuring some kind of temporary folder-sharing setup between different OSes on different computers with different login names. (The Irish Sea wing of Vulture Towers recently moved house and has yet to finalize his office layout and reconnect his NAS servers. It’s climbing to the top of the to-do list, though.) LocalSend is also available on both the iOS App Store and Google Play Store, so it can help for devices that you can’t readily plug a USB key into. The transfer happens across your local network, so it won’t use up bandwidth on metered internet connections, and will even work if your internet connection is down. Warpinator is Mint’s solution – but in our case, we initially needed to move the file from Windows to macOS. Both have ports of Warpinator, but both seem unofficial, and while the machines could see one another, file transfers failed. We’ve also tried SyncThing, but it’s not good at keeping machines in sync when they’re rarely on at the same time – and we’ve had problems with it recursively duplicating directory trees into themselves so deeply that no GUI tool could delete them. Ideally, you should have an always-on home server that also runs SyncThing – and if you have one of those, then for one-off file transfers, you don’t really need SyncThing: just copy it to the server, and off again. LocalSend just worked, and for us, it worked identically whether either end was running Windows, Linux, or macOS. We couldn’t ask for more. ®
Categories: Linux fréttir

Bill To Block Publishers From Killing Online Games Advances In California

Slashdot - Fri, 2026-05-15 19:00
An anonymous reader quotes a report from Ars Technica: A bill focused on maintaining long-term playable access to online games has passed out of the California Assembly's appropriations committee, setting up a floor vote by the full legislative body. The advancement is a major win for Stop Killing Games' grassroots game preservation movement and comes over the objections of industry lobbyists at the Entertainment Software Association. California's Protect Our Games Act, as currently written, would require digital game publishers who cut off support for an online game to either provide a full refund to players or offer an updated version of the game "that enables its continued use independent of services controlled by the operator." The act would also require publishers to notify players 60 days before the cessation of "services necessary for the ordinary use of the digital game." As currently amended, the act would not apply to completely free games and games offered "solely for the duration of [a] subscription." Any other game offered for sale in California on or after January 1, 2027, would be subject to the law if it passes. [...] In a formal statement of support for the bill sent to the California legislature, SKG wrote that "there is no other medium in which a product can be marketed and sold to a consumer and then ripped away without notice. As live service games rise in popularity for game developers and gamers alike, end-of-life procedures are essential tools to ensure prolonged access to the games consumers pay to enjoy." The Entertainment Software Association, which helps represent the interests of major game publishers, publicly told the California Assembly last month that the bill misrepresents how modern game distribution actually works. "Consumers receive a license to access and use a game, not an unrestricted ownership interest in the underlying work," the ESA wrote.
The eventual shutdown of outdated or obsolete games is "a natural feature of modern software," the group added, especially when that software requires online infrastructure maintenance. The ESA also said the bill would impose unreasonable expectations on publishers regarding licensing rights for music or IP rights, which are often negotiated on a time-limited basis. "A legal requirement to keep games playable indefinitely could place publishers in an impossible position -- forcing them to renegotiate licenses indefinitely or alter games in ways that may not be legally or technically feasible," they wrote.

Read more of this story at Slashdot.

Categories: Linux fréttir

OpenAI Now Wants ChatGPT To Access Your Bank Accounts

Slashdot - Fri, 2026-05-15 18:00
OpenAI is previewing a feature that lets ChatGPT Pro users connect bank and investment accounts through Plaid, allowing the chatbot to analyze spending, subscriptions, balances, portfolios, debt, and major financial decisions. "More than 200 million people are already going to ChatGPT every month with finance questions -- from budgeting to tips on how to cut back on spending," OpenAI said in its announcement. "Now, users can securely connect their financial accounts with Plaid to get the full view of their financial picture in the context of their personal goals, lifestyle, and priorities that they've shared with ChatGPT, powered by OpenAI's advanced reasoning capabilities." The Verge reports: When financial accounts are connected, OpenAI says that ChatGPT users can view a dashboard that details their spending history, including any active subscriptions. Users can also ask it to help with financial decisions like buying a house or signing up for credit cards and flag any changes in spending habits. This financial feature will be initially available to users in the US who subscribe to ChatGPT's $200-per-month Pro tier. "We'll learn and improve from early use before rolling it out to Plus, with the goal of making it available to everyone," says OpenAI. To assuage concerns, OpenAI promises users "control over their data," including the ability to disconnect their bank accounts from ChatGPT at any time, though the company has up to 30 days to delete your data from its systems. You can also view and delete "financial memories" like goals or financial obligations saved by the chatbot. User control extends to whether your data is fed back into AI models -- users can enable the option to "Improve the model for everyone" to allow financial data in their ChatGPT conversations to be used for training AI, for example. OpenAI also says ChatGPT can't make any changes to your bank accounts or see "full account numbers."

Read more of this story at Slashdot.

Categories: Linux fréttir

Microsoft puts stability in the driver's seat with new initiative

TheRegister - Fri, 2026-05-15 17:25
Microsoft has laid out plans for how it and its partners will deal with iffy drivers causing stability problems in the company's flagship operating system. Dubbed the Driver Quality Initiative (DQI), the program rests on four pillars. These are Architecture – hardening kernel-mode drivers and enabling third-party kernel-mode drivers to transition to user mode; Trust – raising the bar for trusted partners and drivers; Lifecycle – addressing outdated and low-quality drivers; and Quality Measures – going beyond simple crash counts to measure driver quality. It's all very laudable, although, aside from references in the architecture pillar, Microsoft's WinHEC 2026 announcement said little about how Redmond ended up in a situation where drivers can run at a privilege level that allows a failure to leave the operating system hopelessly borked. The infamous CrowdStrike incident of 2024, which crashed millions of Windows devices, ably demonstrated the dangers of drivers running around in the Windows kernel. Microsoft later blamed a 2009 undertaking with the European Commission for how that situation came to be, although it skipped over the whole not-creating-an-API-so-security-vendors-didn't-need-kernel-access part. In the months after the CrowdStrike incident (or "learnings", as Microsoft delicately put it), the Windows Resiliency Initiative was announced. According to Microsoft, "DQI builds on the learnings and infrastructure established through the Windows Resiliency Initiative." Drivers are the bane of many Windows users. A faulty driver can make the entire operating system unstable. Sure, a customer might wonder how such a situation has been allowed to happen. Still, we are where we are, and dealing with it requires Microsoft to harden the operating system and provide ways for vendors to work with Windows that don't involve breaking down the kernel's doors. Those same vendors need to ensure that drivers are high-quality and reliable.
“Driver and platform quality,” wrote Microsoft, “is central to the customer experience.” The company has said much in recent months about how it intends to "fix" Windows after a disastrous few years that have taken a hatchet to consumer confidence. Fripperies like moving the taskbar and rethinking Redmond's relentless pushing of Copilot are one thing. Dealing with driver-related crashes is quite another. WinHEC 2026 has shown that at least some within Microsoft are determined to deal with the fundamentals, and that requires taking the Windows maker's hardware partners along for the ride. ®
Categories: Linux fréttir

ArXiv to Ban Researchers for a Year if They Submit AI Slop

Slashdot - Fri, 2026-05-15 17:00
ArXiv says it will ban authors for one year if they submit papers containing AI-generated slop, such as hallucinated citations, placeholder text, or chatbot meta-comments left in the manuscript. "If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s)," said Thomas Dietterich, chair of the computer science section of ArXiv, on X. "We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper." 404 Media reports: Examples of incontrovertible evidence, he wrote, include "hallucinated references, meta-comments from the LLM ('here is a 200 word summary; would you like me to make any changes?'; 'the data in this table is illustrative, fill it in with the real numbers from your experiments.'" "The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue," Dietterich wrote. Dietterich told [404 Media] in an email on Friday morning that this is a one-strike rule -- meaning authors caught just once including AI slop in submissions will be banned -- but that decisions will be open to appeal. "I want to emphasize that we only apply this to cases of incontrovertible evidence," he said. "I should also add that our internal process requires first a moderator to document the problem and then for the Section Chair to confirm before imposing the penalty."

Read more of this story at Slashdot.

Categories: Linux fréttir

Google'll grab your gigs if you don’t cough up your number

TheRegister - Fri, 2026-05-15 16:09
Google is testing a storage reduction for new accounts unless a phone number is provided. The change the Chocolate Factory is trialing affects new accounts, reducing the free storage from 15 GB to a miserly 5 GB unless the user provides a telephone number. Not all new users are impacted. We created a Gmail account today, and were given the full 15 GB of storage without being required to provide a phone number (although it did ask for one for activation code purposes). The test is also regional and, it must be emphasized, is just that at this stage – a test. However, it could point to a future where tech vendors demand more data in return for using a 'free' service. Arguably, we're living in that future right now. A Google spokesperson told The Register: "We're testing a new storage policy for new accounts created in select regions that will help us continue to provide a high quality storage service to our users, while encouraging users to improve their account security and data recovery." A Reddit thread on the matter contained all manner of theories regarding what the data might be used for, including nefarious commercial purposes. Judging by the screenshot, Google is trying to curb people who create multiple accounts to gain more storage. 15 GB is not a lot of storage these days, particularly given the relentless growth in media file sizes. That said, a drop to 5 GB would bring Google into line with Apple, which gives customers the same amount unless they upgrade to iCloud+. Microsoft gives users 15 GB of free Outlook.com storage, and Proton Mail's free tier gives users 1 GB (initially 500 MB until a starting checklist is completed). Should the test become reality, it could be seen as yet another step on a worrying path. Sure, you can have more free storage: sign here and agree to hand over these bits of your personal information. 
As demand for storage increases, vendor offerings are looking ever more miserly, and a cut from Google, even with the best of intentions, will rankle. Then again, if you are concerned about privacy and your personal information being used for commercial purposes, it could be that, for all its convenience, Gmail might not be the right tool for you. Reducing storage to 5 GB for new users (existing users aren't affected) unless a telephone number is handed over might be the nudge that some users need to look elsewhere for their email needs. ®
Categories: Linux fréttir

Congress Introduces Bill To Permanently Block Chinese Vehicles From US

Slashdot - Fri, 2026-05-15 16:00
Longtime Slashdot reader sinij shares a report from Car and Driver: A group of Michigan lawmakers has introduced a bill in Congress that would effectively ban Chinese connected vehicles from being sold in the United States permanently. While an executive order signed by Joe Biden in early 2025 already imposed heavy restrictions, the new bill would codify and expand on the ban, as first reported by Autoweek and explained in a release by the House of Representatives Select Committee on China. The bill, titled the Connected Vehicle Security Act, was co-signed by John Moolenaar, a Michigan Republican, and Debbie Dingell, a Michigan Democrat. It joins a companion version of the same Connected Vehicle Security Act introduced last month to the Senate by Sen. Bernie Moreno, an Ohio Republican, and Sen. Elissa Slotkin, a Michigan Democrat. While the wording is similar to that found in former President Biden's January 2025 executive order, the new bill would codify the language into law, as well as determine rules for compliance and enforcement. Specifically, the new bill would restrict Chinese automakers from selling passenger cars in the United States if those vehicles contain any China-developed connectivity software. Officially, the bill covers the sale of vehicles from states deemed "foreign adversary countries," which include China, Russia, North Korea, and Iran. The proposed legislation arrives as Chinese automakers including Chery, Geely, and BYD (maker of the 2026 BYD Dolphin Surf) continue to rise in prominence in foreign markets around the world. "Doing the right thing for the wrong reasons," comments sinij. "Connected cars that spy on consumers are not a uniquely Chinese problem and should be addressed for all vehicles."

Read more of this story at Slashdot.

Categories: Linux fréttir

Honda Retreats To Hybrids After Failed EV Bet Triggers Record $9 Billion Loss

Slashdot - Fri, 2026-05-15 15:00
An anonymous reader quotes a report from Electrek: Honda is waving the white flag. The Japanese automaker previewed two new hybrids set to launch by 2028 after its failed EV bet saddled it with a hit of more than $9 billion, the biggest loss in company history. Honda admitted it was "unable to deliver products that offer value for money better than that of new EV manufacturers, resulting in a decline in competitiveness," after suddenly announcing plans to cancel three new EVs in the US in March, warning restructuring costs could reach 2.5 trillion yen ($15.7 billion). After Honda on Thursday posted its first annual loss since becoming a publicly traded company in 1957, CEO Toshihiro Mibe revealed the company's comeback plans. Honda is no longer planning to phase out gas-powered vehicles by 2040. Instead, Honda now aims "to achieve carbon neutrality by 2050," including a mix of EVs, hybrids, carbon-neutral fuels, and carbon-offset tech. Starting next year, Honda plans to begin introducing its next-gen hybrids, underpinned by a new hybrid system and platform. Honda said it aims to improve fuel economy by over 10% in its upcoming hybrids. The new system is expected to help cut costs by over 30% compared to Honda's current hybrid system. By the end of the decade, Honda plans to launch 15 new hybrid models globally. In North America, its most important market, the company will introduce larger hybrids in the D-segment or above. Honda previewed two of the new hybrids during the business update: the Honda Hybrid Sedan Prototype and the Acura Hybrid SUV Prototype, which the company said will go on sale within the next two years.

Read more of this story at Slashdot.

Categories: Linux fréttir

NASA's Psyche mission set for a brief encounter with Mars

TheRegister - Fri, 2026-05-15 14:09
More than two years after launch, NASA's Psyche mission will whizz past Mars on May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, which was launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid, Psyche. In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the signals from the spacecraft as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory. These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries is the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune. The trajectory allowed significant propellant to be saved. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision. Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. According to NASA, Sarah Bairstow, Psyche's mission planning lead at the Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we’ll also make observations with the mission's other science instruments." 
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at Arizona State University. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ®
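The Doppler measurement mentioned above is conceptually simple: a spacecraft receding along the line of sight lowers the frequency of its downlink carrier in proportion to its radial velocity. A minimal sketch of the arithmetic, with the caveat that the carrier frequency and shift below are illustrative round numbers, not actual Psyche telemetry:

```python
# Illustrative one-way Doppler calculation: recover a spacecraft's
# radial velocity from the shift in its downlink carrier frequency.
# The numbers below are made up for illustration only.

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(f_transmitted_hz: float, f_received_hz: float) -> float:
    """Non-relativistic one-way Doppler: v = c * (f_tx - f_rx) / f_tx.
    A positive result means the spacecraft is receding."""
    return C * (f_transmitted_hz - f_received_hz) / f_transmitted_hz

# An X-band carrier near 8.4 GHz, received 250 kHz low:
f_tx = 8.4e9
f_rx = f_tx - 250_000.0
print(f"radial velocity ~ {radial_velocity(f_tx, f_rx):.0f} m/s")  # prints: radial velocity ~ 8922 m/s
```

In practice mission navigators fit the full time history of the shift, plus ranging data, rather than a single measurement, which is why confirmation of the post-flyby trajectory takes a little while.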
Categories: Linux fréttir

Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028

TheRegister - Fri, 2026-05-15 13:33
AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ®
Categories: Linux fréttir

Exploited Exchange Server flaw turns OWA inboxes into script launchpads

TheRegister - Fri, 2026-05-15 11:51
Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. The vulnerability does not affect Exchange Online. 
Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ®
Categories: Linux fréttir

Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day

TheRegister - Fri, 2026-05-15 11:15
Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager. Switchzilla dropped an advisory for CVE-2026-20182 (10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system. According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. It means they could steal data, intercept traffic, manipulate an organization's firewall rules, or just bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it. Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system. "A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric." Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for the security flaws that are both actively being exploited and threaten federal agencies. 
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks. Cisco said in its advisory there are no workarounds available, and it "strongly recommends" applying the available fixes. Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should be auditing the auth.log file at /var/log/auth.log for entries related to Accepted publickey for vmanage-admin from unknown or unauthorized IP addresses. Then, check those IP addresses against the configured System IPs that are listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said. Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, 10.0) from February. ®
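Cisco's audit guidance amounts to scanning auth.log for successful vmanage-admin public-key logins and checking the source addresses against the System IPs configured in the Manager UI. A rough sketch of that check follows; the known-IP set is a placeholder you would populate from your own deployment, and real triage should of course treat the log itself as potentially tampered with:

```python
# Sketch of the log audit Cisco describes: extract source IPs from
# successful "Accepted publickey for vmanage-admin" entries, then
# flag any that are not on the list of configured System IPs.
# KNOWN_IPS is a placeholder; fill it in from the Catalyst SD-WAN
# Manager web UI for your deployment.
import re
from pathlib import Path

PATTERN = re.compile(r"Accepted publickey for vmanage-admin from (\S+)")

def suspect_logins(log_text: str, known_ips: set[str]) -> set[str]:
    """Return login source IPs that are not configured System IPs."""
    return set(PATTERN.findall(log_text)) - known_ips

if __name__ == "__main__":
    KNOWN_IPS = {"10.0.0.1", "10.0.0.2"}  # placeholder values
    log_path = Path("/var/log/auth.log")  # path from Cisco's advisory
    if log_path.exists():
        for ip in sorted(suspect_logins(log_path.read_text(errors="replace"), KNOWN_IPS)):
            print(f"SUSPECT login source: {ip}")
```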
Categories: Linux fréttir

Americans Would Rather Have a Nuclear Plant In Their Backyard Than a Datacenter

Slashdot - Fri, 2026-05-15 11:00
A new Gallup survey found that 71% of Americans oppose having an AI data center built near them, making the facilities even less popular than nearby nuclear plants, which 53% oppose. The Register reports: When it comes to the reasons for opposing AI campuses, half of all respondents cite the effect on resources, with excess water usage and potential power grid constraints topping the list. Concern about loss of farmland and nature was surprisingly low, with just 7 percent mentioning this, but it is possible the scores are higher in rural areas. Quality-of-life concerns such as increased traffic were put forward by nearly a quarter, while a fifth mentioned higher utility bills. Many were worried about AI specifically: that it would replace human workers, that they don't trust it, that it is moving too fast, and that the industry needs regulating. Perhaps the latter sentiment is why President Trump appears to have shifted his own position on the need for AI regulations. Conversely, those in favor of datacenters cite economic benefits, with 55 percent mentioning increased job opportunities, and 13 percent saying it is because of increased tax revenues. [...] This being America in 2026, Gallup looked at how attitudes stack up depending on political affiliation. It found that Democrats, at 56 percent, are much more likely than Republicans to be strongly opposed to a server farm in their vicinity. But 39 percent of Republicans are also strongly opposed, while another 24 percent are somewhat averse to it, and only about a third are in favor. Gallup points out the contradiction: for AI usage to expand in the US, facilities that can handle the necessary computing power will have to be built. But most Americans appear to take a "not in my backyard" attitude to new bit barns, and that attitude has grown in strength.

Read more of this story at Slashdot.

Categories: Linux fréttir

X tells Ofcom it will finally check its moderation inbox

TheRegister - Fri, 2026-05-15 10:52
Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department. Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel. The company also committed to engaging with external experts on how its reporting systems work, following several organizations' complaints that they were unclear whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if the accounts are reported for posting illegal terrorist content. Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises. "Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action." The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms. 
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents and attacks on Jewish sites in Britain, including attacks in Manchester, Golders Green, and recent arson attempts in London. The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation including issues related to Grok is ongoing and that it will continue to probe X's broader illegal content compliance systems. ®
Categories: Linux fréttir

ZTE showcases at GSMA M360 LATAM 2026, driving future business model restructuring - AI & network two-way integration

TheRegister - Fri, 2026-05-15 10:26
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, participated in GSMA M360 LATAM 2026. Ms. Chen Zhiping, Chief International Ecosystem Representative of ZTE, delivered a keynote speech entitled "Driving Future Business Model Restructuring — AI & Network Two-Way Integration" at the conference. Ms. Chen provided an in-depth analysis of the industrial value of the two-way integration of AI and networks, sharing ZTE's achievements in the Latin American market over the past two decades, its AI-Native network innovation practices, and its full-scenario intelligent solutions, helping Latin American operators complete their strategic upgrade from "connectivity providers" to "digital economy enablers". Facing the AI industry wave, ZTE released its global strategic vision in 2025: "All in AI, AI for All, Becoming a Leader in Connectivity and Intelligent Computing". Ms. Chen stated that this strategy is highly aligned with the core concepts of this GSMA Summit. In the future, ZTE will move beyond traditional network connectivity services, continuously upgrade its basic network capabilities, and comprehensively expand its AI and intelligent computing business layout. Through a two-way integration model of AI empowering the network and the network supporting AI, ZTE will reconstruct a new business model adapted to the AI era and activate new growth momentum for the Latin American digital economy. In terms of AI-enabled network upgrades, ZTE has pioneered the AI-Native network concept, deeply embedding AI capabilities into all network layers and processes to maximize network efficiency and optimize costs. In the wireless network field, ZTE's new 5G BBU integrates native intelligent computing capabilities, effectively improving the overall efficiency of hardware and software resources and increasing cell throughput by 20%. 
Simultaneously, by combining Super-N high-performance power amplifiers and AI intelligent optimization technology, equipment energy consumption is reduced by 38%. Currently, AAU and RRU products equipped with this technology have been deployed on a large scale in several Latin American countries, including Chile, Ecuador, Bolivia, Brazil, and Peru, with over 37,000 units deployed to date, saving local operators millions of dollars in electricity costs annually and achieving efficient, green, and intelligent network upgrades. Built upon AI-Native technology, the AIR Net advanced intelligent network solution enables commercial deployment of "autonomous driving" for networks, comprehensively revolutionizing operator operation and maintenance models and reducing overall TCO. This solution has already been commercially deployed in multiple locations globally. Currently, ZTE's intelligent network capabilities have obtained authoritative L4-level certification from the TM Forum, and its self-developed Co-Claw enterprise-level intelligent agent has been fully implemented internally, continuously improving network automation and intelligence levels and helping operators move towards advanced intelligent networks. In response to the complex and diverse network environment in Latin America, ZTE continues to implement scenario-based coverage solutions to bridge the regional digital divide. In indoor scenarios, ZTE has partnered with Chilean company Millicom to deploy the Qcell solution, achieving stable gigabit coverage throughout buildings. In remote rural scenarios, ZTE collaborates with Brazilian company Claro to implement the RuralPilot simplified rural network solution, addressing network coverage challenges in the vast Amazon region with its low cost and ease of maintenance. ZTE also offers a wide range of home coverage solutions, precisely matching the networking needs of different regions and scenarios in Latin America. Ms. 
Chen Zhiping stated that ZTE will continue to be rooted in the Latin American market, deepen the two-way integration and innovation of AI and networks, and continue to implement green, efficient, and intelligent full-stack ICT solutions to help local operators complete their strategic transformation, upgrade from traditional connectivity service providers to digital economy enablers, comprehensively meet the intelligent needs of industries and families in all scenarios, and work together to build a smart, inclusive, and sustainable new digital ecosystem in Latin America. Contributed by ZTE.
Categories: Linux fréttir

OpenAI caught in TanStack npm supply chain chaos after employee devices compromised

TheRegister - Fri, 2026-05-15 10:08
OpenAI says attackers behind the TanStack npm supply chain compromise stole internal credentials after reaching two employee devices, forcing the company to rotate signing certificates for several desktop products. The company disclosed this week that it had been caught up in the wider "Mini Shai-Hulud" campaign targeting npm ecosystems and developer infrastructure, though it said there was no evidence that customer data, production systems, or deployed software were compromised. OpenAI said the incident happened during a phased rollout of new supply chain security controls introduced after a previous Axios-related incident. According to the company, the two compromised employee devices had not yet received updated package management protections that would have blocked the malicious dependency. The attackers carried out "credential-focused exfiltration activity" against a limited set of internal repositories reachable from the affected employee machines, according to OpenAI. It said "only limited credential material was successfully exfiltrated from these code repositories." That was apparently enough to trigger a precautionary reset across multiple products. OpenAI is rotating the certificates used to sign macOS versions of ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and is requiring users to update the affected software by June 12. The incident ties OpenAI to the increasingly messy supply chain campaign that has spent the past several weeks worming through npm ecosystems, CI/CD infrastructure, and GitHub Actions workflows. Security firm Socket linked the TanStack compromise to the broader "Mini Shai-Hulud" operation, which abused poisoned automation workflows and stolen publishing credentials to push malicious package updates into trusted software pipelines. 
Researchers tracking the wider Mini Shai-Hulud campaign have connected the activity to a threat group known as TeamPCP, which appears to have developed an unhealthy interest in poisoning npm ecosystems and rifling through developer credentials. TanStack confirmed this week that 84 malicious package versions spanning 42 @tanstack/* packages had been published after attackers compromised parts of its release infrastructure. The poisoned packages were designed largely to steal credentials, including GitHub tokens, cloud secrets, npm credentials, and CI/CD authentication material. The campaign appears linked to earlier Mini Shai-Hulud attacks involving SAP-related npm packages, suggesting the same credential-stealing operation is spreading across multiple developer ecosystems. OpenAI said it is continuing to investigate the incident and monitor for any downstream abuse tied to the stolen credentials. The reassuring news is that OpenAI says no production systems were breached. The less reassuring news is that attackers keep getting deeper into the software assembly line before anybody notices. ®
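For teams assessing exposure, the first step is simply enumerating which @tanstack/* versions a project's lockfile pins. A sketch of that check against an npm v2/v3 package-lock.json; the BAD_VERSIONS table here is a placeholder stand-in, to be populated from TanStack's published list of the 84 affected versions:

```python
# Sketch: list @tanstack/* packages pinned in a package-lock.json
# and flag any whose installed version appears in an advisory list.
# BAD_VERSIONS holds placeholder entries, not real advisory data.
import json
from pathlib import Path

BAD_VERSIONS = {
    "@tanstack/example-pkg": {"9.9.9"},  # placeholder, not a real advisory entry
}

def tanstack_packages(lock: dict) -> dict[str, str]:
    """Map @tanstack/* package name -> version from an npm v2/v3 lockfile,
    whose "packages" keys are node_modules paths."""
    found = {}
    for path, info in lock.get("packages", {}).items():
        name = info.get("name") or path.rpartition("node_modules/")[2]
        if name.startswith("@tanstack/") and "version" in info:
            found[name] = info["version"]
    return found

def flag_compromised(lock: dict) -> list[str]:
    """Return sorted "name@version" strings matching the advisory list."""
    return [
        f"{name}@{ver}"
        for name, ver in sorted(tanstack_packages(lock).items())
        if ver in BAD_VERSIONS.get(name, set())
    ]

if __name__ == "__main__":
    lock_path = Path("package-lock.json")
    if lock_path.exists():
        for hit in flag_compromised(json.loads(lock_path.read_text())):
            print("COMPROMISED:", hit)
```

A lockfile scan only shows what is pinned; it says nothing about credentials already exfiltrated by a previously installed version, so a hit should trigger token rotation, not just a version bump.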
Categories: Linux fréttir
