Linux fréttir

iPhone-Android RCS Conversations Are End-To-End Encrypted In iOS 26.5

Slashdot - Mon, 2026-05-11 19:00
Apple says end-to-end encryption for RCS messages between iPhone and Android is now available in iOS 26.5, though the feature is still considered beta and depends on carrier support on both sides. MacRumors reports: Apple says that it worked with Google to lead a cross-industry effort to add E2EE to RCS. iOS users will need iOS 26.5, while Android users will need the latest version of Google Messages. End-to-end encryption is on by default, and there is a toggle for it in the Messages section of the Settings app. Encrypted messages are denoted with a small lock symbol. On iPhones not running iOS 26.5, RCS messages between iPhone and Android users do not have E2EE, but the new update will put Android to iPhone conversations on par with iPhone to iPhone conversations that are encrypted through iMessage. Along with Google, Apple worked with the GSM Association to implement E2EE for RCS messages. E2EE is part of the RCS Universal Profile 3.0, published with Apple's help and built on the Messaging Layer Security protocol. RCS Universal Profile 3.0 also includes editing and deleting messages, cross-platform Tapback support, and replying to specific messages inline during cross-platform conversations.

Read more of this story at Slashdot.

Categories: Linux fréttir

OpenAI can't have incompetent AI consultants ruining the market, so bought its own

TheRegister - Mon, 2026-05-11 18:33
OpenAI can’t have inexperienced consultants derailing the AI hype train, so it’s launching a consultancy of its own to help enterprises find the value in its models necessary to justify the spending – revenue that Sam Altman's company desperately needs to cover its infrastructure costs. To support the endeavor, OpenAI has agreed to acquire UK-based AI consulting firm Tomoro. The terms of the acquisition weren’t disclosed. Tomoro will form the backbone of the OpenAI Deployment Company, which will operate as a standalone business unit tasked with helping enterprises find the value that they've been missing from the AI flag bearer's models. But don’t worry, McKinsey. OpenAI’s new Forward Deployed Engineers (FDEs) are only there to make sure you don’t sour enterprises on AI by dragging them down an expensive rabbit hole that fails to deliver value. The new company is backed by the usual assortment of AI-crazed venture capitalists and private equity firms, but several consultancies, including Capgemini, Bain, and yep, McKinsey, have agreed to plow billions into the venture. OpenAI says that its AI consultancy will launch with more than $4 billion of investments. Presumably, these consultancies will call in OpenAI’s FDEs when they need help proving AI can boost productivity and/or cut payroll. According to OpenAI, a typical enterprise engagement will look a bit like this: OpenAI’s FDEs will launch a diagnostic to determine where AI can create the most value, then carry out a select set of PoCs. If successful, the FDEs will then design, build, and deploy production systems that tie into enterprises' existing customer data and tools. The experience gained from these integrations will no doubt be used to improve OpenAI’s models and services. The acquisition of Tomoro would bring approximately 150 FDEs and deployment specialists into OpenAI’s new consultancy unit. The deal is expected to close in the coming months, subject to regulatory approvals. Whether enterprises should hitch their wagon to OpenAI’s success at a time when inference providers and model devs are already jacking up prices in an effort to get their infrastructure costs under control is another matter entirely. As we reported last week, with the launch of GPT-5.5, OpenAI once again increased its API pricing. For one million tokens, GPT-5.5 is priced at $5 (input), $0.50 (cached input), and $30 (output), double that of its predecessor. But don’t worry, OpenAI says the model might be more frugal about how it uses those tokens. ®
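For a rough sense of what those per-token rates mean in practice, here is a minimal sketch that turns the quoted GPT-5.5 prices into a monthly estimate. The rates are the figures reported above; the token counts are invented placeholders, not real usage data.

```python
# Back-of-the-envelope cost estimate using the GPT-5.5 per-million-token rates
# quoted above: $5 input, $0.50 cached input, $30 output.
# The token counts below are arbitrary illustrative values, not real usage.

RATES_PER_MILLION = {"input": 5.00, "cached_input": 0.50, "output": 30.00}

def estimate_cost(tokens: dict[str, int]) -> float:
    """Return the dollar cost for a dict of token counts keyed like RATES_PER_MILLION."""
    return sum(RATES_PER_MILLION[kind] * count / 1_000_000
               for kind, count in tokens.items())

if __name__ == "__main__":
    monthly_usage = {"input": 40_000_000, "cached_input": 10_000_000, "output": 8_000_000}
    print(f"Estimated monthly bill: ${estimate_cost(monthly_usage):,.2f}")
    # 40M * $5/M + 10M * $0.50/M + 8M * $30/M = $200 + $5 + $240 = $445
```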
Categories: Linux fréttir

Students Boo Commencement Speaker After She Calls AI the 'Next Industrial Revolution'

Slashdot - Mon, 2026-05-11 18:00
An anonymous reader quotes a report from 404 Media: Speaking to graduates of University of Central Florida's College of Arts and Humanities and Nicholson School of Communication and Media on May 8, commencement speaker Gloria Caulfield, vice president of strategic alliances at Tavistock Group, told graduating humanities students that AI is the "next industrial revolution," and was met with thousands of booing graduates. "And let's face it, change can be daunting. The rise of artificial intelligence is the next industrial revolution," Caulfield said. At that point, murmurs rippled through the crowd. Caulfield paused, and the crowd erupted into boos. "Oh, what happened?" Caulfield said, turning around with her hands out. "Okay, I struck a chord. May I finish?" Someone in the crowd yelled, "AI SUCKS!" Her speech begins around the hour and 15 minute mark in the UCF livestream. [...] Before the industrial revolution comment, Caulfield praised Jeff Bezos for his passion and use of Amazon as a "stepping stone" to his real dream: spaceflight. Rattled after the crowd's reaction, she continued her speech: "Only a few years ago, AI was not a factor in our lives." The crowd cheered. "Okay. We've got a bipolar topic here I see," Caulfield said. "And now AI capabilities are in the palm of our hands." The crowd booed again. "I love it, passion, let's go," she said. "AI is beginning to challenge all major sectors to find their highest and best use," she continued. "Okay, I don't want any giggles when I say this. We have been through this before, these industrial revolutions. In my graduation era, we were faced with the launch of the internet." She goes on to talk about how cellphones used to be the size of briefcases. "At that time we had no idea how any of these technologies would impact the world and our lives. [...] These were some of the same trepidations and concerns we are now facing. But ultimately it was a game changer for global economic development and the proliferation of new businesses that never existed like Apple and Google and Meta and so many others, and not to mention countless job opportunities. So being an optimist here, AI alongside human intelligence has the potential to help us solve some of humanity's greatest problems. Many of you in this graduating class will play a role in making this happen."

Read more of this story at Slashdot.

Categories: Linux fréttir

Debian 14 cracks down on unreproducible packages

TheRegister - Mon, 2026-05-11 17:04
About halfway through the Debian 14 “Forky” development process, its release team announced a new goal: deterministic package compilation. The Debian project’s latest Bits from the release team newsletter has a goal which may not sound very big, but will mean significant extra effort in a direction that could prove to be a valuable extra security measure. "Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages," wrote ReleaseTeam member Paul Gevers. "Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility." Of the two links in that paragraph, the independent Reproducible Builds project does not, in this vulture’s humble opinion, explain what it’s all about very clearly. We feel that Debian’s own Reproducible Builds wiki page does it better: It should be possible to reproduce, byte for byte, every build of every package in Debian. The Wikipedia article also has a good clear explanation, and introduces a helpful synonym: deterministic compilation. In other words, if you use the same version of the same compiler with the same options, then every time you compile an identical set of source files, the process ought to result in an identical set of binary files. This is starting to become an industry trend – for instance, when we reported on the release of FreeBSD 15 late last year, we noted that it too now promises reproducible builds. Reproducible builds in Debian have been a long time coming: The Register first reported on Debian’s efforts in this direction way back in 2015. It’s not an easy task, but it’s a useful security measure. The idea is to ensure that binaries have not been tampered with – for instance, modified to insert malware. It permits an additional verification step, so that users or automated tools can check whether the binaries they (or their OS package manager) downloaded are byte-identical to the ones they can compile themselves. Without this, you just have to trust the distributor who compiled your OS – as Ubuntu “self-appointed benevolent dictator for life” Mark Shuttleworth pointed out in 2012. (The Internet Archive has a copy of his long-gone blog post.) We also mentioned reproducible builds when we looked at NixOS Raccoon back in 2022, and tried to explain why it was a desirable thing. (Around the same time, Rocky Linux CEO Greg Kurtzer also told us that it was part of the plan for that project, too.) NixOS is already a little further down the reproducibility trail, and as we reported on its add-on Flox deployment tool in 2024, it also aims to deliver reproducible deployments. This won’t directly make Debian safer. It’s already one of the safer and more stable Linux distros there are, anyway. Instead, it’s about infrastructure changes that make it easier to check the supply chain, and to make it possible to write software that can check and verify that what you’re getting really is what you thought that you were getting. If it all works, you won’t be able to tell any difference – but auditing tools will. Debian 13 came out last August, and so Debian 14 is expected in about a year – although it does not have to stick to a rigorous fixed schedule like the commercially-backed projects. ®
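To make the "byte for byte" idea concrete, here is a minimal sketch of the verification step described above: hash the package you downloaded and the package you rebuilt yourself, and compare. The file paths are placeholders, and a real check would lean on Debian's own tooling (such as debrebuild and .buildinfo files) rather than this bare comparison.

```python
# Minimal illustration of the reproducible-builds verification step:
# a locally rebuilt package should be byte-for-byte identical to the one the
# distributor shipped, so their cryptographic hashes must match.
# File paths below are placeholders for illustration.

import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

downloaded = Path("hello_2.10-3_amd64.deb")           # package from the mirror
rebuilt = Path("rebuild/hello_2.10-3_amd64.deb")      # package you compiled yourself

if sha256sum(downloaded) == sha256sum(rebuilt):
    print("Reproducible: binaries are byte-for-byte identical")
else:
    print("Mismatch: the build is not reproducible (or something was tampered with)")
```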
Categories: Linux fréttir

Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time

Slashdot - Mon, 2026-05-11 17:00
Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. "Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report," reports Politico. "The company then issued a patch to fix the issue." From the report: Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities -- marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them. Google concluded that Anthropic's Claude Mythos model -- which has already found thousands of vulnerabilities across every major operating system and web browser -- was most likely not used to create the zero-day exploit. [...] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods. John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has "already begun." "For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."

Read more of this story at Slashdot.

Categories: Linux fréttir

Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator

TheRegister - Mon, 2026-05-11 16:30
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. “The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he notes that their description and explanation were well done. Mythos can do good work, in other words, but it’s not a ground-breaking, game-changing AI model like Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.” cURL code is no stranger to AI To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz testing it since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. 
“A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” Stenberg’s experience with AI testing cURL, in other words, makes it a great candidate to see how effective Mythos can really be at finding more than the average AI. As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. Stenberg isn’t an AI doomer when it comes to its ability to improve software design, though. Yes, he may have closed the cURL bug bounty earlier this year due to an influx of sloppy, useless bug reports, but he also noted a few months prior to the bounty closure that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.” Mythos isn’t any more creative than its creators Both older AI models and security-focused tools like Mythos have a common limitation, as far as Stenberg is concerned: They’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it "an amazingly successful marketing stunt for sure" in his blog post. In an email to The Register, Stenberg admitted that it’d be possible for AI models to actually discover new, novel types of vulnerabilities, but he’s still not convinced that they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code. “Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, like the valuable AI-assisted reports made to the cURL bug bounty program before its closure due to a flood of AI garbage, making valuable use of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said that he hopes he’ll actually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding out hope the promised access will materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Categories: Linux fréttir

Apple Now Requires Verification For Education Store

Slashdot - Mon, 2026-05-11 16:00
Apple now requires Education Store shoppers in the U.S. and several other countries to verify their student, educator, parent, or homeschool-teacher status through UNiDAYS, ending the previous honor-system approach. 9to5Mac reports: Starting today, Apple requires shoppers in the United States to complete verification when making a purchase via the Education Store. This change also applies to Australia, Hong Kong, Turkey, Canada, and Chile. In many other markets around the world, such as the UK, Apple already required verification. As a refresher, people eligible for Apple's Education Store include current and newly accepted college students and their parents, as well as faculty, staff, and homeschool teachers across all grade levels. Apple is teaming up with UNiDAYS to handle the verification process. Students and educators will be asked to create a UNiDAYS ID and then verify their academic status by logging in to their school's academic portal. Alternatively, users can upload a photo of their student or faculty IDs. Homeschool teachers, meanwhile, will need to provide an identity document such as a driver's license, state ID card, or passport. They'll also need to provide one homeschool document, such as a Letter of Intent (LOI) or Letter of Acknowledgment. Most customers will be verified instantly, and those requiring manual verification should hear back within 24 hours. The same verification process applies both in-store and online for Apple Education Store shoppers. Meanwhile, Apple has added Apple Watch to the Education Store for the first time, offering discounts on the Series 11, SE 3, and Ultra 3.

Read more of this story at Slashdot.

Categories: Linux fréttir

Anthropic Says 'Evil' Portrayals of AI Were Responsible For Claude's Blackmail Attempts

Slashdot - Mon, 2026-05-11 15:00
An anonymous reader quotes a report from TechCrunch: Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment." Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time." What accounts for the difference? The company said it found that training on "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Related, Anthropic said that it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.

Read more of this story at Slashdot.

Categories: Linux fréttir

Gtk2-NG, next generation of Gtk 2, comes back to life

TheRegister - Mon, 2026-05-11 14:54
An effort to revive and reinvigorate the 2002 Gtk2 GUI programming toolkit is growing and gaining interest… as we predicted would happen a few months ago. The gtk2-ng project is reviving and modernizing Gtk 2, which the GNOME developers declared dead back in 2020. We held off on reporting this for a while to see if the idea would gain some support, and it does seem to be winning interest and followers. Reviving a 24-year-old toolkit that reached its official end-of-life six years ago is a retrospective sort of undertaking, and as such, it appeals to some modern-but-nostalgic development projects. Development is hosted on the Git instance of the Devuan project, the systemd-free fork of Debian. (Last year, Devuan announced its support of Xlibre, the X.org fork that aims to re-invigorate X11 development.) However, developer Daemonratte announced the fork in a thread on the forums of the Pale Moon browser: GTK2 revival. Pale Moon, as we described in 2021, is a continuing fork of an early version of Firefox. Back in February, when we covered the news that Debian 14 planned to drop Gtk2, we mentioned that this might provide the impetus for a fork. This isn’t the first such fork, and we mentioned then that the Ardour digital audio workstation we last looked at in 2022 maintains its own internal version called YTK. Daemonratte says that they’ve already incorporated some fixes from that, and also from an earlier fork by stefan11111 which has been inactive for a couple of years. They then outline the current goals:

Current status:
- Making it Y2K38-safe
- Getting rid of all deprecation warnings
- Patching it for NetBSD and backporting NetBSD-specific patches
- Testing it on all kinds of hardware
- Further modernization without breaking ABI

Future plans:
- Implement touch support and smooth scrolling from Ardour’s ytk without breaking ABI, so Ardour can be compiled against gtk2 again
- Heavily lobby for its adoption in the BSD and systemd-free Linux world
- Reimplement GtkMozEmbed for UXP, so this wonderful engine can be used in gtk2 projects

Gtk originally stood for GIMP Tool Kit: 30 years ago, when the GIMP image editor made its public début, Gtk was the set of tools GIMP’s authors created to make it easier to write GUI apps in C. Six years later, GTK+ 2.0.0 appeared. The new plus symbol in its name represented a new object-oriented design. When Miguel de Icaza announced the GNOME desktop project in 1997, it adopted Gtk instead of the then-semi-commercial Qt that KDE used. Since then, Gtk has been developed along with GNOME. GIMP development is relatively slow: the team finally released version 3.0 a year ago, and it uses Gtk 3. (Last month, it released version 3.2.4.) Since launch, though, the GNOME project has released 39 numbered versions, and in recent decades Gtk has kept pace with GNOME, not GIMP. The last version of Gtk 2 was GTK+ 2.24.0 in 2012. The GNOME developers officially said it was end-of-life with the release of Gtk 4 in 2020. Gtk2-ng is far from the only project to fork and revive an older version of a project which has since been superseded by newer versions from the original team. One of the obvious ones is the MATE desktop, which Argentinian developer Perberos announced in 2011. Saying that, though, Daemonratte stated: "The ultimate vision of this fork is to keep gtk2 alive for software using it right now and to revive gtk2 versions of […] Gnome2 […]. Yes, I don’t have to do this alone and no, Mate is not an option, because they use gtk3 now." It is very much not alone.
We have been covering releases of KDE 3 fork the Trinity desktop environment since version 14.0.11 in 2021. This vulture used KDE 1.x back when it was the state of the Linux art, and for us, KDE 3.x was already too big and complicated. For the KDE project’s 20th anniversary in 2016, Brazilian developer Helio Chissini de Castro modernized KDE 1 so that it would build and run on Fedora 25. We didn’t realize this had become an ongoing effort, but it has. From later in the Gtk-ng thread, we learned about MiDesktop, a continuing project based on Osiris, a modernized Qt 2. ®
Categories: Linux fréttir

BWH Hotels guests warned after reservation data checks out with cybercrooks

TheRegister - Mon, 2026-05-11 14:34
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. It confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotel customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage. Navigate to sites directly rather than clicking links." ®
Categories: Linux fréttir

Feature freeze for Python 3.15 as first beta released

TheRegister - Mon, 2026-05-11 14:14
The Python team has released the first beta of version 3.15, with new features including a stable application binary interface (ABI) for free-threaded CPython, lazy imports to speed startup time, a new zero-overhead sampling profiler, use of UTF-8 text encoding by default, and a faster just-in-time (JIT) compiler. Python's development cycle bars new features after the first beta release. There is typically a new feature release in October each year, with version 3.15 currently scheduled for October 1. The option to remove the global interpreter lock (GIL), available in Python 3.14, was the biggest change to Python for years, enabling efficient concurrency on multi-core CPUs. The new stable ABI means that C extensions can now be compiled for multiple minor versions of free-threaded builds, though the team warns that doing so means only a subset of the full CPython API is available. The existing stable ABI remains available, and it is possible to compile for both. Extension maintainers will benefit, since building new versions for every minor Python release is a burden. Explicit lazy imports can improve startup time for Python applications by deferring a module's loading until it is first accessed. Otherwise, an imported module is loaded and compiled to bytecode immediately - though developers could use workarounds at the expense of code readability. The solution for this is a new keyword:

lazy import json

A new sampling profiler called Tachyon works by capturing stack traces from running processes, instead of instrumenting function calls. According to the docs, the approach "provides virtually zero overhead while achieving sampling rates of up to 1,000,000 Hz" and can be used to debug performance issues in production. Text encoding in Python 3.15 is now UTF-8 by default, though explicit encoding is still recommended for best compatibility. CPython is the reference implementation of Python, and improving its performance has long been a focus. An experimental JIT compiler was introduced in version 3.14, though not recommended for production use - and could make code run more slowly. In 3.15, the JIT compiler is much improved, and the team now reports an 8-9 percent mean performance improvement over the CPython interpreter on x86-64 Linux, and 12-13 percent on Apple silicon macOS, though some code may still run up to 15 percent slower. These figures may change before the final release. In contrast, the incremental garbage collector released in 3.14 has been reverted, following reports of memory leaks. This aimed to improve performance by reclaiming memory less frequently. It was removed in Python 3.14.5 and the core team stated: "If we want to reintroduce the incremental GC for 3.16, it can go through the regular PEP process and be more thoroughly evaluated." The full list of what is new in 3.15 is documented here. ®
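As a minimal sketch of how the new keyword could be used (assuming a Python 3.15 interpreter, as described above; older versions will reject the syntax, and the function and data here are invented for illustration):

```python
# Sketch of the explicit lazy-import syntax described above (Python 3.15+).
# The module is only loaded and compiled to bytecode the first time the name
# is actually used, which keeps startup fast for rarely taken code paths.

lazy import json            # registered, but not imported yet

def dump_report(data: dict) -> str:
    # First real use of the name triggers the actual import of json.
    return json.dumps(data, indent=2)

if __name__ == "__main__":
    print(dump_report({"status": "ok", "items": 3}))
```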
Categories: Linux fréttir

Google says criminals used AI-built zero-day in planned mass hack spree

TheRegister - Mon, 2026-05-11 13:38
Google says crooks already have AI cooking up zero-days, and claims one nearly escaped into the wild before the company stopped it. In a report shared with The Register ahead of publication on Monday, Google’s Threat Intelligence Group said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly developed by criminals working together on a large-scale intrusion operation. GTIG said that the attackers appear to have used an AI model to both identify the flaw and help turn it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue before the campaign could properly kick off, which it believes may have disrupted the operation before it gained traction. The company insists that neither Gemini nor Anthropic’s Mythos was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as "educational docstrings," a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data. Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding. "While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies," the report said. John Hultquist, chief analyst at Google Threat Intelligence Group, said anyone still treating AI-assisted vulnerability discovery as a future problem is already behind. "There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks." Google’s report suggests that the zero-day case is part of something much bigger. GTIG said North Korean crew APT45 had been using AI to churn through thousands of exploit checks and bulk out its toolkit, while Chinese state-linked operators were experimenting with AI systems for vulnerability hunting and automated probing of targets. Google also described malware families padded out with AI-generated junk code designed to confuse analysts, Android backdoors using Gemini APIs to autonomously navigate infected devices, and Russian influence operations stitching fabricated AI-generated audio into legitimate news footage. The awkward bit for everyone else is that this still appears to be the clumsy early phase. Google said mistakes in the exploit’s implementation probably interfered with the criminals’ plans this time around, but that may not stay true for long. ®
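To illustrate the class of flaw being described – a hard-coded trust exception in an authentication flow rather than the memory-corruption bugs fuzzers usually catch – here is a deliberately contrived sketch. It is not code from the affected product; every name in it is invented.

```python
# Contrived example of the flaw class described above: a hard-coded "trust"
# exception lets a request skip the second factor entirely. All names here are
# invented for illustration; this is not code from the affected product.

def verify_login(username: str, password_ok: bool, otp_ok: bool,
                 request_headers: dict) -> bool:
    if not password_ok:
        return False

    # The vulnerable pattern: a developer convenience left in the
    # authentication flow. Anything a client can set (a header, a flag, an
    # allow-listed "internal" address) must never bypass the second factor.
    if request_headers.get("X-Internal-Tool") == "legacy-sync":
        return True          # 2FA silently skipped -- exploitable

    return otp_ok            # the check that should always run

# An attacker who learns the magic header value authenticates with only a
# stolen password:
assert verify_login("admin", password_ok=True, otp_ok=False,
                    request_headers={"X-Internal-Tool": "legacy-sync"})
```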
Categories: Linux fréttir

SoftBank bets on battery building to back bit barns

TheRegister - Mon, 2026-05-11 13:37
SoftBank is getting into the datacenter battery business and plans to start manufacturing batteries on the scale of gigawatt-hours per year of capacity to support the power needs of AI infrastructure, including its own. The Japan-based tech investment biz says it aims to deploy the battery systems it is developing at its own large-scale AI server farms initially, but plans to make them more widely available in future. It hopes to begin mass production in financial year 2027, and expects the operation to generate revenue of ¥100 billion (over $600 million) per year by 2030. SoftBank is working with two South Korean firms that have a track record in advanced battery-related technologies. One is Cosmos Lab, developer of zinc-halogen batteries that use pure water as an electrolyte, making them non-flammable, and the other is DeltaX, which designs and manufactures battery-based energy storage systems (BESS). Reg readers may recall that SoftBank last year bought the rights to a former Sharp LCD panel factory in Sakai City, Osaka prefecture in Japan, and said it planned to convert it into a datacenter to operate AI agents developed jointly with ChatGPT creator OpenAI. The site will now become an industrial cluster, home to its battery manufacturing facility as well. SoftBank referred to it as a core hub to establish its AX Factory (a center for datacenter operations and AI infrastructure hardware manufacturing), and GX Factory (serving as a manufacturing facility for next-gen batteries, solar panels, and related products). One detail missing is how much cash the investment biz is pouring into this venture. We asked how much the project is costing to get off the ground, but a SoftBank spokesperson told us it was not able to comment. SoftBank plans to start by deploying the battery systems produced at its GX Factory in its own server halls, but will then provide them for grid applications in Japan, plus factories and other industrial uses. It hopes to take the technology into global markets over the medium term. In presentation slides seen by The Register, the firm says BESS for commercial and industrial use will have a capacity of 140 kWh to 560 kWh, while those for large-scale or grid-scale use will come in at 2,240 kWh to 5,380 kWh. According to SoftBank, DeltaX has developed BESS capable of energy densities exceeding 5 MWh in a standard commercial container format (a 20-foot shipping container). The way DeltaX packs together and connects the battery cells in its BESS maximizes their performance, SoftBank claims, and by applying these technologies to next-generation battery cells (presumably referring to those of Cosmos Lab), further improvements in energy storage can be achieved. Those battery cells, which SoftBank calls Innovative batteries, use a halogen-based material for the cathode and zinc for the anode, which it says offers charge-discharge characteristics with minimal energy loss and energy efficiency comparable to existing lithium-ion batteries. As they use pure water as the electrolyte, SoftBank claims these batteries are inherently safer and won't catch fire, unlike lithium-ion batteries, which have a well-documented tendency to do exactly that. SoftBank has its finger in a number of pies when it comes to AI projects. The firm was aiming to pump $22.5 billion into LLM developer OpenAI before the end of 2025, and more recently announced plans for a massive 10 GW datacenter campus on US Department of Energy (DoE) land in Ohio. 
The company is also the majority shareholder of chip designer Arm, which recently revealed its first Arm-branded datacenter processor targeting AI, and owns Ampere Computing, which makes Arm-based server chips. ®
Categories: Linux fréttir

Water company's leaky security earns near-£1M fine

TheRegister - Mon, 2026-05-11 12:52
The UK's data protection watchdog has fined South Staffordshire Water's parent company nearly £1 million over security failings exposed by the Cl0p ransomware attack in 2022. Issuing the fine of £963,900 ($1.3 million), the Information Commissioner's Office (ICO) said the attack exposed "significant failures in the company's approach to data security." The attack, claimed by Cl0p, was detected in July 2022 after engineers responded to performance issues, but a thorough postmortem revealed the initial intrusion occurred almost two years earlier, in September 2020. Among the key failures that led to the attack, and the nearly two-year delay in detecting it, were:

- Limited controls, which allowed the attacker to escalate their privileges to admin after gaining an initial foothold on the network
- Inadequate monitoring and logging. The ICO noted that only 5 percent of South Staffordshire's IT environment was being monitored
- Running unsupported software, including Windows Server 2003
- Poor vulnerability management. Investigations showed critical systems were unpatched against known vulnerabilities, and the company failed to regularly run internal or external security scans

The ICO said 633,887 people were affected by the attack and the resulting leak of company files. For customers, this included personally identifiable information, usernames and passwords used to access its online services, and bank account numbers and sort codes. For a limited number of customers on the utility company's Priority Services Register, the stolen information could have led to their disabilities being inferred. Cl0p also pilfered HR information, including employees' National Insurance numbers. The trove of company data was later leaked online in a file exceeding 4 TB. At the time of the attack, South Staffordshire handled the data of some 1.85 million individuals. Most of these were either current or former customers, but several thousand staffers' details were also retained. "Customers do not have the choice over which water company serves them – they are required to share their personal information and place their trust in that provider," said Ian Hulme, interim executive director for regulatory supervision at the ICO. "It is therefore essential that water companies honor that trust by taking their data protection responsibilities seriously." "The steps that South Staffordshire failed to take are established, widely understood and effective controls to protect computer networks. The ICO expects all organizations – and particularly those handling large volumes of personal information as part of critical national infrastructure – to have these in place." "Waiting for performance issues or a ransom note to discover a breach is not acceptable. Proactive security is a legal requirement, not an optional extra." The ICO announced its intent to fine South Staffordshire in December 2025. The regulator said after reviewing the company's representations, which included agreement with its findings and an early admission of wrongdoing, it reduced the fine by 40 percent. "We accept the Information Commissioner's Office's decision relating to the cyberattack our Group experienced in 2022, and are sorry for the worry and concern it caused for customers and employees," said Charley Maher, group CEO at South Staffordshire Plc, in a statement provided to The Register. "We took immediate action to contain the incident, support those impacted, and reduce the risk of recurrence." 
"We have invested significantly to further strengthen our cybersecurity resilience, governance, and monitoring, and we continue to enhance our capabilities as the threat landscape evolves. Protecting customer and employee information is a responsibility we take extremely seriously, and we remain focused on learning from this incident and maintaining strong safeguards across the Group." ®
Categories: Linux fréttir

Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged

TheRegister - Mon, 2026-05-11 12:11
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend. It updated customers on Saturday, May 9, after discovering a version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, was made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. Installed by several hundred controllers, the plugin remains available at the time of writing, and appears as the most recently available version, although pull requests actioned on Monday morning suggest this will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind the earlier supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware. These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugins page, renaming it to “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now,” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx’s secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®
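Admins who want to confirm which version of the plugin a controller is actually running can query the Jenkins plugin manager's JSON API, as in the minimal sketch below. The "known good" version string is the one quoted above; the controller URL, the credentials, and the plugin short name are assumptions for illustration and should be checked against your own setup and Checkmarx's guidance.

```python
# Query a Jenkins controller's plugin manager API to see which version of the
# Checkmarx AST plugin is installed. The controller URL, credentials, and the
# plugin short name are placeholders/assumptions -- verify them against your
# own environment and the vendor's advisory. The "known good" version string
# is the one quoted in the vendor statement above.

import requests

JENKINS_URL = "https://jenkins.example.com"          # placeholder
AUTH = ("admin", "api-token")                        # placeholder credentials
PLUGIN_SHORT_NAME = "checkmarx-ast-scanner"          # assumed plugin ID
KNOWN_GOOD = "2.0.13-829.vc72453fa_1c16"             # per the vendor statement

resp = requests.get(f"{JENKINS_URL}/pluginManager/api/json",
                    params={"depth": 1}, auth=AUTH, timeout=30)
resp.raise_for_status()

for plugin in resp.json().get("plugins", []):
    if plugin.get("shortName") == PLUGIN_SHORT_NAME:
        version = plugin.get("version")
        status = "OK" if version == KNOWN_GOOD else "INVESTIGATE"
        print(f"{PLUGIN_SHORT_NAME} {version}: {status}")
        break
else:
    print(f"{PLUGIN_SHORT_NAME} not installed on this controller")
```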
Categories: Linux fréttir

NASA's bid to save Swift from fiery death passes another hurdle

TheRegister - Mon, 2026-05-11 11:52
A rescue mission for NASA's Neil Gehrels Swift Observatory has taken another step forward following the completion of environmental tests at the agency's Goddard Space Flight Center. The purpose of the tests was to assess how the LINK robotic servicing spacecraft, supplied by Katalyst Space Technologies, would withstand the forces of launch and the extremes of the orbital environment. The mission is ambitious and fast-paced. It was only in August 2025 that NASA asked US industry for ideas on rescuing the observatory, whose orbit is decaying faster than expected. Katalyst was awarded the contract and has been working against the clock to launch its servicing spacecraft before Swift reaches the point of no return. In February 2026, NASA ended most science operations aboard Swift to keep the spacecraft in orbit long enough for the rescue mission. At the time, June 2026 was Katalyst's expected launch date and, thanks to the successful completion of testing, the mission remains on track. The next step is for Northrop Grumman to integrate LINK into its Pegasus rocket in early June, with launch planned from the last airworthy L-1011 TriStar (dubbed Stargazer) later that month. The LINK spacecraft has undergone vibration testing to simulate a Pegasus launch and thermal-vacuum testing in Goddard's Space Environment Simulator, where it experienced space-like hot and cold temperature extremes. The team also test-fired the spacecraft's three xenon-powered ion thrusters and deployed one of its robotic arms. Kieran Wilson, LINK's principal investigator at Katalyst, said: "We're in an unusual situation where the schedule dictates how much risk we’re willing to accept, rather than the other way around. "The clock is ticking on Swift's descent, so we have to find a balance between testing and problem solving that gives the mission the best chance of success." After paying tribute to the speed at which Katalyst was moving, Swift mission director John Van Eepoel said: "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude." In this instance, the Swift observatory has nothing to lose and everything to gain from the reboost mission. The spacecraft is more than 20 years into a two-year task to study gamma-ray bursts. If it weren't for its decaying orbit (and the Trump administration's effort to terminate it - the mission was on the chopping block in the FY2026 budget proposal), it could continue observations for years to come. ®
Categories: Linux fréttir

Linux Kernel Starts Retiring Support for AMD's 30-Year-Old K5 CPUs

Slashdot - Mon, 2026-05-11 11:34
Linux 7.1 started phasing out support for Intel's 37-year-old i486 processor. Linux 7.2 removed drivers for the old AMD Elan 32-bit systems on a chip. And now some i586 and i686 class processors are being removed, reports Phoronix: Supporting those vintage CPUs without the Time Stamp Counter "TSC" instruction is becoming a burden... TSC-capable Intel Pentium processors and the like will still be supported with this just being for TSC-less i586/i686 CPUs. Among the CPUs impacted by this latest change is the AMD K5 as well as various Cyrix processor models. The K5 was AMD's first entirely in-house designed processor, introduced in 1996 to counter the Intel Pentium CPU. TSC "support can now be assumed as a boot requirement for modern Linux," the article points out, which will allow the removal of various non-TSC code paths from the Linux kernel's x86 code. Tom's Hardware remembers the K5 "wasn't a very popular processor as it arrived late, then offered lackluster performance in the competitive environment it joined." Launch SKUs in 1996 were limited to clocks from 75 MHz to 133 MHz, and, due to being late, Intel's Pentium line was already faster. AMD still managed to get an edge on the Cyrix 6x86, though.
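Since TSC support effectively becomes a boot requirement under this change, owners of genuinely ancient x86 hardware can check whether their CPU advertises the instruction before planning kernel upgrades: on Linux it appears as the "tsc" entry in the flags line of /proc/cpuinfo. A minimal sketch:

```python
# Quick check for the Time Stamp Counter: on Linux, CPUs that support the
# RDTSC instruction advertise a "tsc" entry in the flags line of /proc/cpuinfo.
# Kernels that assume TSC at boot will not run on processors lacking it.

from pathlib import Path

def cpu_has_tsc(cpuinfo: str = "/proc/cpuinfo") -> bool:
    for line in Path(cpuinfo).read_text().splitlines():
        if line.startswith("flags"):
            return "tsc" in line.split(":", 1)[1].split()
    return False  # no flags line found (not an x86 kernel, or a very old CPU)

if __name__ == "__main__":
    print("TSC present" if cpu_has_tsc() else "No TSC: affected by this removal")
```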

Read more of this story at Slashdot.

Categories: Linux fréttir

Linux kernel maintainers pitch emergency killswitch after CopyFail and Dirty Frag chaos

TheRegister - Mon, 2026-05-11 11:16
Linux kernel maintainers are considering giving admins a giant red emergency button to smash the next time another nasty vulnerability drops before patches are ready. The proposed feature, named "Killswitch," would let admins temporarily disable specific vulnerable kernel functions at runtime instead of sitting around waiting for fixes. The so-called patch was submitted by Linux stable kernel co-maintainer and Nvidia engineer Sasha Levin after a bruising couple of weeks for Linux security. The proposal basically gives admins a way to pull the plug on vulnerable kernel functionality. If exploit code starts spreading before patches arrive, the targeted function can be disabled so calls to it immediately fail instead of reaching the vulnerable code. "When a (security) issue goes public, fleets stay exposed until a patched kernel is built, distributed, and rebooted into," Levin wrote. "For many such issues the simplest mitigation is to stop calling the buggy function. Killswitch provides that." The past couple of weeks have not exactly been great advertising for the traditional "wait for patches" approach. First we saw the disclosure of CopyFail, a Linux local privilege escalation bug that quickly moved from disclosure to active exploitation. Days later, Dirty Frag emerged: another Linux privilege escalation flaw with public exploit code and no official fixes, after coordinated disclosure efforts fell apart before patches were ready. As Levin's proposal itself puts it, organizations are often left exposed "until a patched kernel is built, distributed, and rebooted into." Killswitch aims to fill that gap. Killswitch would work through the kernel's security interface and is mainly intended for subsystems that systems can survive without for a while. In practical terms, Levin's argument is that temporarily losing some networking or crypto functionality is preferable to leaving known vulnerable code exposed on production systems. However, the feature would not fix vulnerable code or replace it with safe code. It just slams the door shut on the dangerous bit until administrators can properly update their kernels. Naturally, handing sysadmins the ability to selectively shoot pieces of the kernel in the head has already sparked debate among developers over stability, potential for abuse, and whether people can be trusted not to accidentally saw off important limbs in production. Still, after CopyFail and Dirty Frag, the kernel community increasingly seems to be arriving at the conclusion that running broken functionality may now be preferable to running weaponized functionality. ®
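The patch is still only a proposal, so there is no stable interface to document yet, but the operational model described above would look something like the sketch below: an administrator names a vulnerable kernel function, and subsequent calls to it fail until a fixed kernel is rolled out. Everything here – the securityfs path, the file name, and the semantics – is a hypothetical illustration, not the actual patch.

```python
# Purely hypothetical illustration of how a runtime "killswitch" might be
# driven from userspace: write the name of a vulnerable kernel function to a
# control file so subsequent calls fail instead of reaching the buggy code.
# The path and semantics below are invented for illustration; the real patch
# set may expose something entirely different.

from pathlib import Path

KILLSWITCH = Path("/sys/kernel/security/killswitch/disable")   # hypothetical

def disable_function(symbol: str) -> None:
    """Ask the (hypothetical) killswitch to start failing calls to `symbol`."""
    if not KILLSWITCH.exists():
        raise RuntimeError("killswitch interface not present in this kernel")
    KILLSWITCH.write_text(symbol + "\n")

if __name__ == "__main__":
    # Example: shut off an (invented) vulnerable function until a fixed
    # kernel can be built, distributed, and rebooted into.
    disable_function("example_vulnerable_func")
```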
Categories: Linux fréttir

Classic Outlook's Quick Steps trip over Microsoft bug

TheRegister - Mon, 2026-05-11 10:28
If you're using Quick Steps in Microsoft Outlook and wondering why they're grayed out, a bug introduced in version 2512 is the culprit. Classic Outlook is approaching the twilight years of its prodigiously long life, but users can still fall victim to productivity-killing bugs – in this case, a problem with Quick Steps. Quick Steps automates common or repetitive tasks in Outlook. Always have to move a bunch of messages to a specific folder? Quick Steps is your friend. Pin an email and mark it as unread? Again, the actions can be lined up in Quick Steps and executed with a single click or a keyboard shortcut. Until Microsoft breaks it. In a support article, Microsoft has confirmed that in some situations, Quick Steps in classic Outlook can appear grayed out. The workaround (if rolling back or switching clients isn't an option) is to use a keyboard shortcut. "The shortcut will work even if the Quick Step is grayed out in the user interface," Microsoft wrote. The problem is that if a Quick Step contains actions that "can't be fulfilled," it's grayed out. Microsoft's own example states: "A Quick Step that moves a message to a folder and clears categories will be grayed out in messages where there are no categories applied." "This is known to happen with Quick Steps with Flags and Categories actions such as 'Clear flags on message' or 'Clear categories'." Classic Outlook has suffered several glitches of late. Microsoft admitted in April that it could occasionally chow down on system resources for no obvious reason. Then there was its tendency to explode when opening too many emails. Microsoft has been clear that Classic Outlook's days are numbered. Outlook 2024 is due to drop out of mainstream support in 2029. However, there remains much that Classic Outlook does which New Outlook doesn't, such as COM support. And, when Microsoft hasn’t broken them, Quick Steps. ®
Categories: Linux fréttir

Europe wants out from under US tech – but first it has to find the exits

TheRegister - Mon, 2026-05-11 10:00
In late December, US Secretary of State Marco Rubio sanctioned former European Commission tech chief Thierry Breton for his role in leading "organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose." The architect of the EU's Digital Services Act (DSA) – a pet hate of the Trump administration – has yet to be deterred. Last month, he joined a chorus of calls for Europe to end its reliance on dominant US tech companies. "The time for an apologetic Europe is over," the former Atos CEO said in a rallying cry that points out we now live in a world "where digital sovereignty has become one of the central arenas of power politics." But what to do about it? US companies hold overwhelming positions in markets including cloud infrastructure and personal productivity tools, to say the least. Breton says Europe has a "constellation of [tech] players that, together, form a considerable base," but offers little explanation of how it might extract itself from the incumbent providers and what the new world might look like. One of his compatriots has, though. Nicolas Roux, systems engineer at French aerospace research lab ONERA, has put together a comprehensive analysis in an attempt to understand which systems might fail first under the kind of pressure the US has already exercised on European institutions and individuals. It also looks at how long they would take to recover and how Europe can reduce its exposure, and which levers – organizational, sectoral, or political – it should pull to ensure better digital sovereignty. The 137-page report is designed for Europe's decision-makers on tech and policy. The details are too numerous to summarize, but it offers a glimpse of some worst-case scenarios as well as cause for optimism. As the report points out, a sense of urgency has gripped European institutions following US sanctions on International Criminal Court (ICC) prosecutor Karim Khan, which led to his Microsoft services being suspended. Microsoft denied responsibility, saying it was the ICC's decision. The Dutch press later reported that the decision was made under duress after Microsoft pointed out that its obligations under the sanctions meant it would have to cut off service to the entire organization unless the ICC removed Khan's access. In March, Henna Virkkunen, Executive Vice-President of the European Commission with responsibility for technological sovereignty, said that Europe's dependence on American technology had become a security concern visible beyond specialist circles. There are so many layers of technology in which the US dominates, with so many interdependencies, any effective move toward digital sovereignty should be based on an understanding of which are the most vulnerable and which are hardest to replace. Roux zeros in on Identity and Access Management (IAM). The US dominates enterprise deployments with few exceptions. "Microsoft, Ping Identity, and IBM as the market's leading operators, with Okta, Oracle Identity Governance, and CyberArk accounting for the majority of remaining enterprise contracts," the report says. "No European vendor appears in any tier of the competitive landscape. For European public administrations, this means that the layer of infrastructure responsible for authenticating every user and authorising every access decision is, in most cases, operated by a vendor incorporated in the United States and subject to American law." 
The market for cloud infrastructure and services is overwhelmingly dominated by US providers, which often interlock infrastructure and platform services with other technologies. "The lock-in is architectural: organizations have built dependencies on platform-specific services (Lambda functions, BigQuery pipelines, Azure Cognitive integrations) that have no direct drop-in replacement. Infrastructure can be migrated but application architecture cannot be switched without rethinking," the report says.
Nonetheless, there are a bunch of European alternatives on the market. France's OVHcloud and Scaleway are among them, as are German providers Hetzner, IONOS, and STACKIT, owned by retail group Schwarz. It may seem impossible for European providers to replace AWS, with its mammoth scale and buying power, but for Roux, replacing AWS is the answer to the wrong question. "No European provider will replicate the full AWS service catalogue. That catalogue was built over twenty years by a company with access to essentially unlimited capital, operating in a continental domestic market with no regulatory friction. The conditions that produced it do not exist in Europe and will not be manufactured by policy. Asking for a European AWS is asking for a different history. The right question is different: for each layer, what does a given organization actually need, and is a credible European alternative available for that specific need?"
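For commodity layers, answering that question can be as mundane as coding against open interfaces so that the provider becomes a configuration choice. The sketch below assumes a European provider exposing an S3-compatible object storage endpoint (OVHcloud and Scaleway both advertise one); the endpoint URL, region, bucket, and key are invented placeholders, and this is an illustration rather than anything prescribed by the report.
    # A minimal sketch, not from the report: treat object storage as an
    # S3-compatible commodity so the provider is picked by configuration,
    # not baked into the application. Endpoint, region, bucket, and key
    # are hypothetical placeholders.
    import boto3

    def make_storage_client(endpoint_url: str, region: str):
        """Return an S3-compatible client; only the endpoint selects the provider."""
        return boto3.client(
            "s3",
            endpoint_url=endpoint_url,  # e.g. a European provider's S3-compatible endpoint
            region_name=region,         # credentials resolve via the usual environment/config chain
        )

    # Switching providers becomes a configuration change rather than a rewrite.
    client = make_storage_client("https://s3.example-provider.eu", "eu-west-1")
    client.put_object(Bucket="ministry-archive", Key="report.pdf", Body=b"placeholder")
No such shortcut exists for the platform-specific services the report singles out (Lambda functions, BigQuery pipelines, Azure Cognitive integrations), which is precisely where Roux locates the real switching costs.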
The report points out that the most serious gaps are in three areas of cloud services. The first is advanced workloads, such as managed AI/ML pipelines and high-concurrency serverless functions, but that constraint only affects a small minority of public sector organizations and is "an irrelevance for the majority." The second is scale: OVHcloud's total 2024 revenue is approximately 0.9 percent of the figure AWS publishes, but a coordinated policy of investment at both EU and state level can help close that gap. The third is coordination. Europe struggles to coordinate services between providers that "operate excellent but largely siloed platforms." Roux says this problem might be solvable "through open standards and interoperability frameworks, but it requires deliberate architectural choices that organizations accustomed to single-vendor convenience are not always prepared to make."
Although starting from a low base, the European cloud market is set for rapid growth as investment mirrors geopolitical concerns. European spending on sovereign cloud infrastructure services is forecast to more than triple between 2025 and 2027, from $6.9 billion to $23.1 billion, Gartner reported in February, growth well ahead of any other region. Speaking to The Register, Rene Buest, Gartner senior director analyst, said European businesses are considering local and regional sovereign cloud providers for new cloud workloads while they work to understand the complexities of migrating established workloads.
This is just a glimpse of the problems – and practical measures – the report outlines. Some of the solutions lie at the policy level: driving demand through public procurement and creating standards. Breton also sees Europe gaining the upper hand through policy, the single market, and by imposing EU rules on data, competition, algorithmic transparency, and taxation. But continuing to create rules that allow for digital sovereignty can be an uphill struggle in the face of US industry lobbying. Roux quotes the NGOs Corporate Europe Observatory and LobbyControl, which studied the EU Transparency Register and concluded that the tech industry spent a record €151 million on EU lobbying, a figure that has increased by a third in two years. "Big Tech" employs more full-time lobbyists in Brussels than there are Members of the European Parliament.
The European Commission is expected to address parts of the issue through a technological sovereignty package set to arrive at the end of May. It is likely to draw on a €234 billion European competitiveness fund, including a €20 billion package for AI infrastructure, supply chain cybersecurity liability provisions for digital infrastructure, and a strong orientation toward sovereign cloud and open source principles. The hope is that through policy and investment, Europe can get CIOs and tech buyers to overcome the barriers to collective action – that is, "each individual sourcing decision is locally rational, while the aggregate outcome (a continued and deepening operational and economic dependency, in the terms defined above) is collectively irrational."
Europe may have been slow to address weaknesses in its digital sovereignty, but it has already proved it has the staying power to take on US might. It took 50 years for a consortium of European aerospace businesses from the UK, France, Germany, and Spain to take on dominant aircraft manufacturer Boeing; in 2023, the number of Airbus aircraft in service surpassed Boeing for the first time. Catherine Jestin, executive vice president of digital at Airbus, told The Register last year that the same could be possible in tech. "It's a long game. And if you look at the way China is approaching it, it takes time. It takes political will and the alignment of the industry," she said. Europe doesn't need to dominate the tech market to ensure its digital sovereignty.
It only needs viable alternatives to US providers at each layer of the stack, rather than direct replacements for the biggest suppliers. It will take time, but it will never get there unless it makes a start. As Roux shows, there are those willing to provide a map. ®
Categories: Linux fréttir
