Linux fréttir
Anthropic Says 'Evil' Portrayals of AI Were Responsible For Claude's Blackmail Attempts
An anonymous reader quotes a report from TechCrunch: Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment."
Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time."
What accounts for the difference? The company said it found that training on "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Relatedly, Anthropic said that it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.
Read more of this story at Slashdot.
Categories: Linux fréttir
Gtk2-NG, next generation of Gtk 2, comes back to life
An effort to revive and reinvigorate the 2002 Gtk2 GUI programming toolkit is growing and gaining interest… as we predicted would happen a few months ago. The gtk2-ng project is reviving and modernizing Gtk 2, which the GNOME developers declared dead back in 2020. We held off on reporting this for a while to see if the idea would gain some support, and it does seem to be winning interest and followers. Reviving a 24-year-old toolkit that reached its official end-of-life six years ago is a retrospective sort of undertaking, and as such, it appeals to some modern-but-nostalgic development projects. Development is hosted on the Git instance of the Devuan project, the systemd-free fork of Debian. (Last year, Devuan announced its support of Xlibre, the X.org fork that aims to re-invigorate X11 development.) However, developer Daemonratte announced the fork in a thread on the forums of the Pale Moon browser: GTK2 revival. Pale Moon, as we described in 2021, is a continuing fork of an early version of Firefox. Back in February, when we covered the news that Debian 14 planned to drop Gtk2, we mentioned that this might provide the impetus for a fork. This isn’t the first such fork, and we mentioned then that the Ardour digital audio workstation we last looked at in 2022 maintains its own internal version called YTK. Daemonratte says that they’ve already incorporated some fixes from that, and also from an earlier fork by stefan11111 which has been inactive for a couple of years.
They then outline the current goals:

Current status:
- Making it Y2K38-safe
- Getting rid of all deprecation warnings
- Patching it for NetBSD and backporting NetBSD-specific patches
- Testing it on all kinds of hardware
- Further modernization without breaking ABI

Future plans:
- Implement touch support and smooth scrolling from Ardour’s ytk without breaking ABI, so Ardour can be compiled against gtk2 again
- Heavily lobby for its adoption in the BSD and systemd-free Linux world
- Reimplement GtkMozEmbed for UXP, so this wonderful engine can be used in gtk2 projects

Gtk originally stood for GIMP Tool Kit: 30 years ago, when the GIMP image editor made its public début, Gtk was the set of tools GIMP’s authors created to make it easier to write GUI apps in C. Six years later, GTK+ 2.0.0 appeared. The new plus symbol in its name represented a new object-oriented design. When Miguel de Icaza announced the GNOME desktop project in 1997, it adopted Gtk instead of the then-semi-commercial Qt that KDE used. Since then, Gtk has been developed along with GNOME. GIMP development is relatively slow: the team finally released version 3.0 a year ago, and it uses Gtk 3. (Last month, it released version 3.2.4.) Since launch, though, the GNOME project has released 39 numbered versions, and in recent decades Gtk has kept pace with GNOME, not GIMP. The last version of Gtk 2 was GTK+ 2.24.0 in 2012. The GNOME developers officially said it was end-of-life with the release of Gtk 4 in 2020. Gtk2-ng is far from the only project to fork and revive an older version of a project which has since been superseded by newer versions from the original team. One of the obvious ones is the MATE desktop, which Argentinian developer Perberos announced in 2011. Saying that, though, Daemonratte stated: "The ultimate vision of this fork is to keep gtk2 alive for software using it right now and to revive gtk2 versions of […] Gnome2 […]. 
Yes, I don’t have to do this alone and no, Mate is not an option, because they use gtk3 now." It is very much not alone. We have been covering releases of KDE 3 fork the Trinity desktop environment since version 14.0.11 in 2021. This vulture used KDE 1.x back when it was the state of the Linux art, and for us, KDE 3.x was already too big and complicated. For the KDE project’s 20th anniversary in 2016, Brazilian developer Helio Chissini de Castro modernized KDE 1 so that it would build and run on Fedora 25. We didn’t realize this had become an ongoing effort, but it has. From later in the gtk2-ng thread, we learned about MiDesktop, a continuing project based on Osiris, a modernized Qt 2. ®
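The first item on Daemonratte’s status list, Y2K38 safety, refers to the rollover of a signed 32-bit time_t, which counts seconds since the Unix epoch and runs out in January 2038. A quick sketch (Python used here purely for illustration) shows where the limit falls:

```python
import datetime

# A signed 32-bit time_t overflows at 2**31 - 1 seconds after the
# Unix epoch (1970-01-01 00:00:00 UTC).
limit = 2**31 - 1
rollover = datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

Code still carrying 32-bit time representations misbehaves from that moment on, which is why the fork treats it as a blocking item.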
Categories: Linux fréttir
BWH Hotels guests warned after reservation data checks out with cybercrooks
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. The company confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotel customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage. 
Navigate to sites directly rather than clicking links." ®
Categories: Linux fréttir
Feature freeze for Python 3.15 as first beta released
The Python team has released the first beta of version 3.15, with new features including a stable application binary interface (ABI) for free-threaded CPython, lazy imports to speed startup time, a new zero-overhead sampling profiler, use of UTF-8 text encoding by default, and a faster just-in-time (JIT) compiler. Python's development cycle bars new features after the first beta release. There is typically a new feature release in October each year, with version 3.15 currently scheduled for October 1. The option to remove the global interpreter lock (GIL), available in Python 3.14, was the biggest change to Python for years, enabling efficient concurrency on multi-core CPUs. The new stable ABI means that C extensions can now be compiled for multiple minor versions of free-threaded builds, though the team warns that doing so means only a subset of the full CPython API is available. The existing stable ABI remains available, and it is possible to compile for both. Extension maintainers will benefit, since building new versions for every minor Python release is a burden. Explicit lazy imports can improve startup time for Python applications by deferring a module's loading until it is first accessed. Otherwise, an imported module is loaded and compiled to bytecode immediately - though developers could use workarounds at the expense of code readability. The solution is a new keyword: lazy import json. A new sampling profiler called Tachyon works by capturing stack traces from running processes, instead of instrumenting function calls. According to the docs, the approach "provides virtually zero overhead while achieving sampling rates of up to 1,000,000 Hz" and can be used to debug performance issues in production. Text encoding in Python 3.15 is now UTF-8 by default, though explicit encoding is still recommended for best compatibility. CPython is the reference implementation of Python, and improving its performance has long been a focus. 
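The workaround mentioned above is usually a function-local import, which hides the dependency inside the function body; a minimal sketch (function name chosen arbitrarily) contrasts it with the new keyword:

```python
# Classic workaround: the import moves into the function body, so the
# module is loaded (and cached by Python) only on the first call, at
# the cost of hiding the dependency from readers of the file header.
def parse_config(text):
    import json  # deferred until parse_config is first called
    return json.loads(text)

# Python 3.15's new syntax declares the same deferral up front instead,
# keeping imports visible at the top of the file:
#   lazy import json

result = parse_config('{"debug": true}')
print(result["debug"])  # True
```

Note that the `lazy import` line shown in the comment will only parse on a 3.15 interpreter.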
An experimental JIT compiler was introduced in version 3.14, though not recommended for production use - and could make code run more slowly. In 3.15, the JIT compiler is much improved, and the team now reports an 8-9 percent mean performance improvement over the CPython interpreter on x86-64 Linux, and 12-13 percent on Apple silicon macOS, though some code may still run up to 15 percent slower. These figures may change before the final release. In contrast, the incremental garbage collector released in 3.14 has been reverted, following reports of memory leaks. This aimed to improve performance by reclaiming memory less frequently. It was removed in Python 3.14.5 and the core team stated: "If we want to reintroduce the incremental GC for 3.16, it can go through the regular PEP process and be more thoroughly evaluated." The full list of what is new in 3.15 is documented here. ®
Categories: Linux fréttir
Google says criminals used AI-built zero-day in planned mass hack spree
Google says crooks already have AI cooking up zero-days, and claims one nearly escaped into the wild before the company stopped it. In a report shared with The Register ahead of publication on Monday, Google’s Threat Intelligence Group said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly developed by criminals working together on a large-scale intrusion operation. GTIG said that the attackers appear to have used an AI model to both identify the flaw and help turn it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue before the campaign could properly kick off, which it believes may have disrupted the operation before it gained traction. The company insists that neither Gemini nor Anthropic’s Mythos was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as "educational docstrings," a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data. Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding. "While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies," the report said. 
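Google has not published the vulnerable code, but a hypothetical sketch of the flaw class it describes (all names invented here) shows how a hard-coded trust exception can gut a 2FA check:

```python
# Hypothetical illustration only: a developer convenience shortcut baked
# into an authentication flow. Any client presenting the magic user agent
# skips the second factor entirely.
TRUSTED_INTERNAL_AGENT = "internal-healthcheck/1.0"  # invented for this sketch

def requires_second_factor(password_ok, user_agent):
    if not password_ok:
        return None  # first factor failed; no session at all
    if user_agent == TRUSTED_INTERNAL_AGENT:
        return False  # the bug: hard-coded trust exception bypasses 2FA
    return True

# An attacker who discovers the magic string sidesteps the 2FA check:
print(requires_second_factor(True, TRUSTED_INTERNAL_AGENT))  # False
```

As the report notes, a flaw like this never crashes anything, so fuzzers and crash-oriented tooling sail past it, which is exactly the gap LLM-style code review is starting to fill.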
John Hultquist, chief analyst at Google Threat Intelligence Group, said anyone still treating AI-assisted vulnerability discovery as a future problem is already behind. "There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks." Google’s report suggests that the zero-day case is part of something much bigger. GTIG said North Korean crew APT45 had been using AI to churn through thousands of exploit checks and bulk out its toolkit, while Chinese state-linked operators were experimenting with AI systems for vulnerability hunting and automated probing of targets. Google also described malware families padded out with AI-generated junk code designed to confuse analysts, Android backdoors using Gemini APIs to autonomously navigate infected devices, and Russian influence operations stitching fabricated AI-generated audio into legitimate news footage. The awkward bit for everyone else is that this still appears to be the clumsy early phase. Google said mistakes in the exploit’s implementation probably interfered with the criminals’ plans this time around, but that may not stay true for long. ®
Categories: Linux fréttir
SoftBank bets on battery building to back bit barns
SoftBank is getting into the datacenter battery business and plans to start manufacturing them on the scale of gigawatt-hours per year of capacity to support the power needs of AI infrastructure, including its own. The Japan-based tech investment biz says it aims to deploy the battery systems it is developing at its own large-scale AI server farms initially, but plans to make them more widely available in future. It hopes to begin mass production in financial year 2027, and expects the operation to generate revenue of ¥100 billion (over $600 million) per year by 2030. SoftBank is working with two South Korean firms that have a track record in advanced battery-related technologies. One is Cosmos Lab, developer of zinc-halogen batteries that use pure water as an electrolyte, making them non-flammable, and the other is DeltaX, which designs and manufactures battery-based energy storage systems (BESS). Reg readers may recall that SoftBank last year bought the rights to a former Sharp LCD panel factory in Sakai City, Osaka prefecture in Japan, and said it planned to convert it into a datacenter to operate AI agents developed jointly with ChatGPT creator OpenAI. The site will now become an industrial cluster, home to its battery manufacturing facility as well. SoftBank referred to it as a core hub to establish its AX Factory (a center for datacenter operations and AI infrastructure hardware manufacturing), and GX Factory (serving as a manufacturing facility for next-gen batteries, solar panels, and related products). One detail missing is how much cash the investment biz is pouring into this venture. We asked how much the project is costing to get off the ground, but a SoftBank spokesperson told us it was not able to comment. SoftBank plans to start by deploying the battery systems produced at its GX Factory in its own server halls, but will then provide them for grid applications in Japan, plus factories and other industrial uses. 
It hopes to take the technology into global markets over the medium term. In presentation slides seen by The Register, the firm says BESS for commercial and industrial use will have a capacity of 140 kWh to 560 kWh, while those for large-scale or grid-scale use will come in at 2,240 kWh to 5,380 kWh. According to SoftBank, DeltaX has developed BESS capable of energy densities exceeding 5 MWh in a standard commercial container format (a 20-foot shipping container). The way DeltaX packs together and connects the battery cells in its BESS maximizes their performance, SoftBank claims, and by applying these technologies to next-generation battery cells (presumably referring to those of Cosmos Lab), further improvements in energy storage can be achieved. Those battery cells, which SoftBank calls Innovative batteries, use a halogen-based material for the cathode and zinc for the anode, which it says offers charge-discharge characteristics with minimal energy loss and energy efficiency comparable to existing lithium-ion batteries. As they use pure water as the electrolyte, SoftBank claims these batteries are inherently safer and won't catch fire, unlike lithium-ion batteries, which have a well-documented tendency to do exactly that. SoftBank has its finger in a number of pies when it comes to AI projects. The firm was aiming to pump $22.5 billion into LLM developer OpenAI before the end of 2025, and more recently announced plans for a massive 10 GW datacenter campus on US Department of Energy (DoE) land in Ohio. The company is also majority shareholder of chip designer Arm, which recently revealed its first Arm-branded datacenter processor targeting AI, and owns Ampere Computing, which makes Arm-based server chips. ®
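Some back-of-envelope arithmetic puts those figures in perspective (our inference from the numbers above, not SoftBank's own math): at DeltaX's claimed 5 MWh per 20-foot container, every gigawatt-hour of annual output corresponds to roughly 200 containers a year.

```python
# Rough scale check using figures quoted in the piece.
container_mwh = 5    # claimed capacity per 20-foot container
annual_gwh = 1       # 1 GWh/year, the low end of "gigawatt-hours per year"
containers_per_year = annual_gwh * 1000 / container_mwh
print(containers_per_year)  # 200.0
```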
Categories: Linux fréttir
Water company's leaky security earns near-£1M fine
The UK's data protection watchdog has fined South Staffordshire Water's parent company nearly £1 million over security failings exposed by the Cl0p ransomware attack in 2022. Issuing the fine of £963,900 ($1.3 million), the Information Commissioner's Office (ICO) said the attack exposed "significant failures in the company's approach to data security." The attack, claimed by Cl0p, was detected in July 2022 after engineers responded to performance issues, but a thorough postmortem revealed the initial intrusion occurred almost two years earlier, in September 2020. Among the key failures that led to the attack, and the nearly two-year delay in detecting it, were:

- Limited controls, which allowed the attacker to escalate their privileges to admin after gaining an initial foothold on the network
- Inadequate monitoring and logging. The ICO noted that only 5 percent of South Staffordshire's IT environment was being monitored
- Running unsupported software, including Windows Server 2003
- Poor vulnerability management. Investigations showed critical systems were unpatched against known vulnerabilities, and the company failed to regularly run internal or external security scans

The ICO said 633,887 people were affected by the attack and the resulting leak of company files. For customers, this included personally identifiable information, usernames and passwords used to access its online services, and bank account numbers and sort codes. For a limited number of customers on the utility company's Priority Services Register, the stolen information could have led to their disabilities being inferred. Cl0p also pilfered HR information, including employees' National Insurance numbers. The trove of company data was later leaked online in a file exceeding 4 TB. At the time of the attack, South Staffordshire handled the data of some 1.85 million individuals. Most of these were either current or former customers, but several thousand staffers' details were also retained. 
"Customers do not have the choice over which water company serves them – they are required to share their personal information and place their trust in that provider," said Ian Hulme, interim executive director for regulatory supervision at the ICO. "It is therefore essential that water companies honor that trust by taking their data protection responsibilities seriously." "The steps that South Staffordshire failed to take are established, widely understood and effective controls to protect computer networks. The ICO expects all organizations – and particularly those handling large volumes of personal information as part of critical national infrastructure – to have these in place." "Waiting for performance issues or a ransom note to discover a breach is not acceptable. Proactive security is a legal requirement, not an optional extra." The ICO announced its intent to fine South Staffordshire in December 2025. The regulator said after reviewing the company's representations, which included agreement with its findings and an early admission of wrongdoing, it reduced the fine by 40 percent. "We accept the Information Commissioner's Office's decision relating to the cyberattack our Group experienced in 2022, and are sorry for the worry and concern it caused for customers and employees," said Charley Maher, group CEO at South Staffordshire Plc, in a statement provided to The Register. "We took immediate action to contain the incident, support those impacted, and reduce the risk of recurrence." "We have invested significantly to further strengthen our cybersecurity resilience, governance, and monitoring, and we continue to enhance our capabilities as the threat landscape evolves. Protecting customer and employee information is a responsibility we take extremely seriously, and we remain focused on learning from this incident and maintaining strong safeguards across the Group." ®
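That 40 percent reduction also lets readers back out the penalty the ICO originally intended (our inference from the article's figures, not an ICO-published number):

```python
# If £963,900 is the fine after a 40 percent reduction, the pre-reduction
# figure is the final amount divided by 0.6.
final_fine = 963_900
implied_original = final_fine / 0.60
print(round(implied_original))  # 1606500
```

In other words, the original penalty would have been a little over £1.6 million.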
Categories: Linux fréttir
Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend. It updated customers on Saturday, May 9, after discovering a version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, was made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. Installed by several hundred controllers, the plugin remains available at the time of writing, and appears as the most recently available version, although pull requests actioned on Monday morning suggest this will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind an earlier supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware. 
These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugins page, renaming it to “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now,” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx’s secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®
Categories: Linux fréttir
NASA's bid to save Swift from fiery death passes another hurdle
A rescue mission for NASA's Neil Gehrels Swift Observatory has taken another step forward following the completion of environmental tests at the agency's Goddard Space Flight Center. The purpose of the tests was to assess how the LINK robotic servicing spacecraft, supplied by Katalyst Space Technologies, would withstand the forces of launch and the extremes of the orbital environment. The mission is ambitious and fast-paced. It was only in August 2025 that NASA asked US industry for ideas on rescuing the observatory, whose orbit is decaying faster than expected. Katalyst was awarded the contract and has been working against the clock to launch its servicing spacecraft before Swift reaches the point of no return. In February 2026, NASA ended most science operations aboard Swift to keep the spacecraft in orbit long enough for the rescue mission. At the time, June 2026 was Katalyst's expected launch date and, thanks to the successful completion of testing, the mission remains on track. The next step is for Northrop Grumman to integrate LINK into its Pegasus rocket in early June, with launch planned from the last airworthy L-1011 TriStar (dubbed Stargazer) later that month. The LINK spacecraft has undergone vibration testing to simulate a Pegasus launch and thermal-vacuum testing in Goddard's Space Environment Simulator, where it experienced space-like hot and cold temperature extremes. The team also test-fired the spacecraft's three xenon-powered ion thrusters and deployed one of its robotic arms. Kieran Wilson, LINK's principal investigator at Katalyst, said: "We're in an unusual situation where the schedule dictates how much risk we’re willing to accept, rather than the other way around. "The clock is ticking on Swift's descent, so we have to find a balance between testing and problem solving that gives the mission the best chance of success." 
After paying tribute to the speed at which Katalyst was moving, Swift mission director John Van Eepoel said: "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude." In this instance, the Swift observatory has nothing to lose and everything to gain from the reboost mission. The spacecraft is more than 20 years into a two-year task to study gamma-ray bursts. If it weren't for its decaying orbit (and the Trump administration's effort to terminate it - the mission was on the chopping block in the FY2026 budget proposal), it could continue observations for years to come. ®
Categories: Linux fréttir
Linux Kernel Starts Retiring Support for AMD's 30-Year-Old K5 CPUs
Linux 7.1 started phasing out support for Intel's 37-year-old i486 processor. Linux 7.2 removed drivers for the old AMD Elan 32-bit systems on a chip.
And now some i586 and i686 class processors are being removed, reports Phoronix:
Supporting those vintage CPUs without the Time Stamp Counter ("TSC") instruction is becoming a burden... TSC-capable Intel Pentium processors and the like will still be supported, with this just being for TSC-less i586/i686 CPUs. Among the CPUs impacted by this latest change is the AMD K5 as well as various Cyrix processor models. The K5, introduced in 1996 to counter the Intel Pentium CPU, was AMD's first entirely in-house designed processor.
TSC "support can now be assumed as a boot requirement for modern Linux," the article points out, which will allow the removal of various non-TSC code paths from the Linux kernel's x86 code.
Tom's Hardware remembers the K5 "wasn't a very popular processor as it arrived late, then offered lackluster performance in the competitive environment it joined."
Launch SKUs in 1996 were limited to clocks from 75 MHz to 133 MHz, and, due to being late, Intel's Pentium line was already faster. AMD still managed to get an edge on the Cyrix 6x86, though.
Read more of this story at Slashdot.
Categories: Linux fréttir
Linux kernel maintainers pitch emergency killswitch after CopyFail and Dirty Frag chaos
Linux kernel maintainers are considering giving admins a giant red emergency button to smash the next time another nasty vulnerability drops before patches are ready. The proposed feature, named "Killswitch," would let admins temporarily disable specific vulnerable kernel functions at runtime instead of sitting around waiting for fixes. The patch was submitted by Linux stable kernel co-maintainer and Nvidia engineer Sasha Levin after a bruising couple of weeks for Linux security. The proposal basically gives admins a way to pull the plug on vulnerable kernel functionality. If exploit code starts spreading before patches arrive, the targeted function can be disabled so calls to it immediately fail instead of reaching the vulnerable code. "When a (security) issue goes public, fleets stay exposed until a patched kernel is built, distributed, and rebooted into," Levin wrote. "For many such issues the simplest mitigation is to stop calling the buggy function. Killswitch provides that." The past couple of weeks have not exactly been great advertising for the traditional "wait for patches" approach. First we saw the disclosure of CopyFail, a Linux local privilege escalation bug that quickly moved from disclosure to active exploitation. Days later, Dirty Frag emerged: another Linux privilege escalation flaw with public exploit code and no official fixes, after coordinated disclosure efforts fell apart before patches were ready. Killswitch aims to fill that gap: it would work through the kernel's security interface and is mainly intended for subsystems that systems can survive without for a while. In practical terms, Levin's argument is that temporarily losing some networking or crypto functionality is preferable to leaving known vulnerable code exposed on production systems. 
However, the feature would not fix vulnerable code or replace it with safe code. It just slams the door shut on the dangerous bit until administrators can properly update their kernels. Naturally, handing sysadmins the ability to selectively shoot pieces of the kernel in the head has already sparked debate among developers over stability, potential for abuse, and whether people can be trusted not to accidentally saw off important limbs in production. Still, after CopyFail and Dirty Frag, the kernel community increasingly seems to be arriving at the conclusion that running broken functionality may now be preferable to running weaponized functionality. ®
Categories: Linux fréttir
Classic Outlook's Quick Steps trip over Microsoft bug
If you're using Quick Steps in Microsoft Outlook and wondering why they're grayed out, a bug introduced in version 2512 is the culprit. Classic Outlook is approaching the twilight years of its prodigiously long life, but users can still fall victim to productivity-killing bugs – in this case, a problem with Quick Steps. Quick Steps automates common or repetitive tasks in Outlook. Always have to move a bunch of messages to a specific folder? Quick Steps is your friend. Pin an email and mark it as unread? Again, the actions can be lined up in Quick Steps and executed with a single click or a keyboard shortcut. Until Microsoft breaks it. In a support article, Microsoft has confirmed that in some situations, Quick Steps in classic Outlook can appear grayed out. The workaround (if rolling back or switching clients isn't an option) is to use a keyboard shortcut. "The shortcut will work even if the Quick Step is grayed out in the user interface," Microsoft wrote. The problem is that if a Quick Step contains actions that "can't be fulfilled," it's grayed out. Microsoft's own example states: "A Quick Step that moves a message to a folder and clears categories will be grayed out in messages where there are no categories applied." "This is known to happen with Quick Steps with Flags and Categories actions such as 'Clear flags on message' or 'Clear categories'." Classic Outlook has suffered several glitches of late. Microsoft admitted in April that it could occasionally chow down on system resources for no obvious reason. Then there was its tendency to explode when opening too many emails. Microsoft has been clear that Classic Outlook's days are numbered. Outlook 2024 is due to drop out of mainstream support in 2029. However, there remains much that Classic Outlook does which New Outlook doesn't, such as COM support. And, when Microsoft hasn’t broken them, Quick Steps. ®
Categories: Linux fréttir
Europe wants out from under US tech – but first it has to find the exits
In late December, US Secretary of State Marco Rubio sanctioned former European Commission tech chief Thierry Breton for his role in leading "organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose." The architect of the EU's Digital Services Act (DSA) – a pet hate of the Trump administration – has yet to be deterred. Last month, he joined a chorus of calls for Europe to end its reliance on dominant US tech companies. "The time for an apologetic Europe is over," the former Atos CEO said in a rallying cry that points out we now live in a world "where digital sovereignty has become one of the central arenas of power politics." But what to do about it? US companies hold overwhelming positions in markets including cloud infrastructure and personal productivity tools, to name just two. Breton says Europe has a "constellation of [tech] players that, together, form a considerable base," but offers little explanation of how it might extract itself from the incumbent providers and what the new world might look like. One of his compatriots has, though. Nicolas Roux, systems engineer at French aerospace research lab ONERA, has put together a comprehensive analysis in an attempt to understand which systems might fail first under the kind of pressure the US has already exerted on European institutions and individuals. It also looks at how long they would take to recover, how Europe can reduce its exposure, and which levers – organizational, sectoral, or political – it should pull to ensure better digital sovereignty. The 137-page report is designed for Europe's decision-makers on tech and policy. The details are too numerous to summarize, but it offers a glimpse of some worst-case scenarios as well as cause for optimism. 
As the report points out, a sense of urgency has gripped European institutions following US sanctions on International Criminal Court (ICC) prosecutor Karim Khan, which led to his Microsoft services being suspended. Microsoft denied responsibility, saying it was the ICC's decision. The Dutch press later reported that the decision was made under duress after Microsoft pointed out that its obligations under the sanctions meant it would have to cut off service to the entire organization unless the ICC removed Khan's access. In March, Henna Virkkunen, Executive Vice-President of the European Commission with responsibility for technological sovereignty, said that Europe's dependence on American technology had become a security concern visible beyond specialist circles. There are so many layers of technology in which the US dominates, with so many interdependencies, that any effective move toward digital sovereignty should be based on an understanding of which are the most vulnerable and which are hardest to replace. Roux zeros in on Identity and Access Management (IAM). The US dominates enterprise deployments with few exceptions. "Microsoft, Ping Identity, and IBM as the market's leading operators, with Okta, Oracle Identity Governance, and CyberArk accounting for the majority of remaining enterprise contracts," the report says. "No European vendor appears in any tier of the competitive landscape. For European public administrations, this means that the layer of infrastructure responsible for authenticating every user and authorising every access decision is, in most cases, operated by a vendor incorporated in the United States and subject to American law." Roux points out that Microsoft 365, the service for productivity apps on which nearly all organizations rely, runs the Redmond vendor's Entra ID as its identity provider by default. 
The report says: "The strategic sensitivity of this layer is compounded by a property it shares with no other: IAM dependency is invisible in normal operations and total in failure. An organization discovers its IAM dependency not when costs increase or performance degrades, but when access is denied. It represents an actionable 'kill switch.'" There is a European alternative in Keycloak, but even if a European organization chose to self-host the service on a European cloud, it would not be free from dependencies on US companies, which could be compelled to turn off services under US legislation, the report argues. "What does not hold is inter-organizational authentication. As long as partner organisations (ministries, contractors, other public bodies) operate Entra ID as their identity provider, external authentication chains pass through Microsoft infrastructure by default. Under pressure, the first thing that breaks is the ability to collaborate securely with anyone outside the organisation's own perimeter." There is a gap in the market for a European IAM provider as a fully managed service with the SLA guarantees and support model that public sector organizations can buy through existing procurement vehicles. But to counter the problem with inter-organizational authentication, Europe needs not a product, but a standard – "a shared European public sector identity federation framework, mandatory for public administrations, built on open protocols, and interoperable by design," Roux says. The market for cloud infrastructure and services is overwhelmingly dominated by US providers, which often interlock infrastructure and platform services with other technologies. "The lock-in is architectural: organizations have built dependencies on platform-specific services (Lambda functions, BigQuery pipelines, Azure Cognitive integrations) that have no direct drop-in replacement. 
Infrastructure can be migrated but application architecture cannot be switched without rethinking," the report says. Nonetheless, there are a bunch of European alternatives on the market. France's OVHcloud and Scaleway are among them, as are German providers Hetzner, IONOS, and STACKIT, owned by retail group Schwarz. It may seem impossible for European providers to replace AWS, with its mammoth scale and buying power, but for Roux, replacing AWS is the answer to the wrong question. "No European provider will replicate the full AWS service catalogue. That catalogue was built over twenty years by a company with access to essentially unlimited capital, operating in a continental domestic market with no regulatory friction. The conditions that produced it do not exist in Europe and will not be manufactured by policy. Asking for a European AWS is asking for a different history. The right question is different: for each layer, what does a given organization actually need, and is a credible European alternative available for that specific need?" The report points out that the most serious gaps are in three areas of cloud services. The first is advanced workloads, such as managed AI/ML pipelines and high-concurrency serverless functions. But the constraint only affects a small minority of public sector organizations and is "an irrelevance for the majority." Secondly, there is scale. OVHcloud's total 2024 revenue is approximately 0.9 percent of the figure AWS publishes. But a coordinated policy of investment at both EU and state level can help close that gap. Lastly, Europe struggles to coordinate services between providers that "operate excellent but largely siloed platforms." Roux says this problem might be solvable "through open standards and interoperability frameworks, but it requires deliberate architectural choices that organizations accustomed to single-vendor convenience are not always prepared to make." 
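The open-protocol federation Roux calls for is workable precisely because the underlying token formats are vendor-neutral. As a minimal illustration (the issuer URL and claims below are invented for the example, and a real relying party must also verify the token's signature against the issuer's published keys), an OpenID Connect ID token is a plain JWT that can be parsed identically whether it came from Entra ID, Keycloak, or any other standards-compliant provider:

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def jwt_claims(token: str) -> dict:
    # A JWT is three dot-separated base64url segments: header.payload.signature
    header_b64, payload_b64, _signature = token.split(".")
    return json.loads(b64url_decode(payload_b64))

# Hypothetical token payload, in the shape any OIDC issuer produces
payload = {"iss": "https://idp.example.eu/realms/gov", "sub": "user-42", "aud": "portal"}
token = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("="),
    "signature-goes-here",  # placeholder; a real token carries a verifiable signature
])
print(jwt_claims(token)["iss"])  # the issuer URL is the only thing that names the provider
```

Because only the `iss` claim identifies the provider, swapping identity vendors is, at the protocol level, a configuration change rather than an application rewrite, which is what makes a mandatory federation standard plausible.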
Although starting from a low base, the European cloud market is set for rapid growth as investment mirrors geopolitical concerns. European spending on sovereign cloud infrastructure services is forecast to more than triple from 2025 to 2027, from $6.9 billion to $23.1 billion, Gartner reported in February, well ahead of any established region. Speaking to The Register, Rene Buest, Gartner senior director analyst, said European businesses are considering local and regional sovereign cloud providers for new cloud workloads, while they work to understand the complexities of migrating established workloads. This is just a glimpse of the problems – and practical measures – the report outlines. Some of the solutions lie at a policy level by driving demand through public procurement and by creating standards. Breton also sees Europe gaining the upper hand through policy, the single market, and by imposing EU rules on data, competition, algorithmic transparency, and taxation. But continuing to create rules that allow for digital sovereignty can be an uphill struggle in the face of US industry lobbying. Roux quotes the NGOs Corporate Europe Observatory and LobbyControl, which studied the EU Transparency Register. They concluded that the tech industry spent a record €151 million on EU lobbying, a figure that has increased by a third in two years. "Big Tech" employs more full-time lobbyists in Brussels than there are Members of the European Parliament. The European Commission is expected to address parts of the issue through a technological sovereignty package set to arrive at the end of May. It is likely to draw on a €234 billion European competitiveness fund, including a €20 billion package for AI infrastructure, supply chain cybersecurity liability provisions for digital infrastructure, and a strong orientation toward sovereign cloud and open source principles. 
The hope is that through policy and investment, Europe can get CIOs and tech buyers to overcome the barriers to collective action – that is, "each individual sourcing decision is locally rational, while the aggregate outcome (a continued and deepening operational and economic dependency, in the terms defined above) is collectively irrational." Europe may have been slow to address weaknesses in its digital sovereignty, but it has already proved it has the staying power to take on US might. It took 50 years for a consortium of European aerospace businesses from the UK, France, Germany, and Spain to take on dominant aircraft manufacturer Boeing. In 2023, the number of Airbus aircraft in service surpassed Boeing for the first time. Catherine Jestin, executive vice president of digital at Airbus, told The Register last year that the same could be possible in tech. "It's a long game. And if you look at the way China is approaching it, it takes time. It takes political will and the alignment of the industry," she said. Europe doesn't need to dominate the tech market to ensure its digital sovereignty. It only needs viable alternatives to US providers at each layer of the stack, rather than direct replacements for the biggest suppliers. It will take time, but it will never get there unless it makes a start. As Roux shows, there are those willing to provide a map. ®
Categories: Linux fréttir
The latest innovation in UK public transport: Schrödinger's trains
BORK!BORK!BORK! Guessing games are all the rage, and commuters trying to get home from London Victoria station found themselves flipping a virtual coin to guess the location of their train after Inspector Bork paid a visit to the station's platform board. London Victoria Station is a major transport hub for England's capital city. Trains from the station serve much of the southern part of the country and farther afield. Built around 1860, the station has had various platform display systems over the years. For a long time, the board was of the Solari split-flap type, replete with a delightful clickety-clack sound as destination information was updated. Today's board is a huge digital display which, while undoubtedly more flexible and capable of displaying far more information than the split-flap affair of old, is also susceptible to a visit from the bork fairy. Where the split-flap board might occasionally jam, the digital board could suddenly go inexplicably dark. As happened on May 7, 2026, when Victoria train station was at its busiest. Where platforms, stations, and times were usually listed, there was instead a network error followed by a clock. As such, while the location of trains might have been a mystery for commuters, at least they knew the time. Some travelers, likely tourists, looked confused. Others, probably regular commuters, continued their muscle-memory-propelled trudge toward the platforms. And in the back office? We suspect some frantic clicking of mouse buttons and hammering of keys while a harassed operator tried to work out what had happened to the data. For many passengers, the borked board was symptomatic of how their day had gone. Problems with the trains in the region had made national news, so an apparent admission that nothing was going anywhere was likely the icing on a particularly unpleasant cake. Still, at least the station is not short of places where adult beverages can be bought and consumed. 
Sometimes that's the best way to deal with a journey on the UK's public transport system, bork or no bork. ®
Categories: Linux fréttir
Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet. Thus it was a very bad day indeed when Taiwan’s entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it’s good to see that the grand tradition of young hackers causing havoc from their bedrooms remains in good repair. On the other, WTRF? The information released by the Taiwanese authorities is scant on details, but enough to be pretty sure what actually happened. It’s bad news not just for Taiwan but for more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s — and work as well in 2026 as you might expect. Oops. There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. But a lot of the installed base globally is old and lacks over-the-air security updates, and in any case spending money on new radios is normally at the bottom of the list for any state or public service organization. Things have to get really bad first. 
Perhaps they just have. (North America is the only region where TETRA is uncommon, as it isn’t approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now, on with the show.) The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn’t have any TETRA hardware, but a laptop connected to a radio and an ‘SDR filter’. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut. SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware could also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog to digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital to analog converter to play the results. What you get is triply terrific. Digital processing is perfect; analog processing adds noise and distortion. Nothing is fixed; everything can be re-engineered with new code. And it can be hog-whimperingly cheap. HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages. 
There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor, and love letters that look suspiciously like arrest warrants. Think of it as speed dating, but with more guns and fewer no thank yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You’ll need it to unlock the cell door. It is the data detection and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it were that original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn’t matter. It’s called a replay attack, and it has been and mostly still is used in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys. Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week. Of course, it can’t: SDRs can’t be banned as a class, especially open source ones made out of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn’t matter if you’re dismayed or delighted that things like HackRF exist, that genie is out of the bottle. What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize and never repeat keys. 
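The principle is simple enough to sketch in a few lines. This is a hypothetical illustration of the general defense (a keyed MAC over a never-repeating counter), not TETRA's actual air-interface cryptography: a receiver that refuses any frame whose counter it has already seen makes a bit-for-bit replay worthless.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # big keyspace: a 256-bit shared secret

def make_frame(counter: int, payload: bytes) -> bytes:
    # Sender binds the payload to a monotonically increasing counter,
    # then authenticates both with an HMAC tag
    msg = counter.to_bytes(8, "big") + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()
    return msg + tag

class Receiver:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, frame: bytes) -> bool:
        msg, tag = frame[:-32], frame[-32:]
        counter = int.from_bytes(msg[:8], "big")
        # A replayed frame carries a counter we've already seen: reject it
        if counter <= self.last_counter:
            return False
        # Constant-time check that the tag is genuine
        if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
            return False
        self.last_counter = counter
        return True

rx = Receiver()
frame = make_frame(1, b"GENERAL_ALARM")
print(rx.accept(frame))  # True: fresh frame accepted
print(rx.accept(frame))  # False: identical replay rejected
```

A code grabber can still record the frame perfectly; it just can't get the receiver to act on it twice, which is exactly the property the attacked system apparently lacked.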
That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing this will be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee’s knife is to the mugger’s. Critical infrastructure resilience. Just three little words, but if you say them you better mean it. And it won’t be cheap. ®
Categories: Linux fréttir
Ford's Electrified Vehicle Sales Dropped 31% in April From One Year Ago
Ford's sales of electrified vehicles — including hybrids and all-electric models — dropped 31% from April 2025, reports Electrek. "Hybrid sales fell 32% to 15,758 vehicles, while EV sales continued to crash with just 3,655 all-electric models sold last month, 25% fewer than in the year prior."
After discontinuing the F-150 Lightning in December, sales of the electric pickup have been in free fall. Ford sold just 884 Lightnings last month, 49% less than it did last April. The Mustang Mach-E isn't doing much better. Sales fell another 9% year over year in April, to just 2,670 models last month. Through the first four months of 2026, Ford's EV sales have fallen 61% from last year, with F-150 Lightning and Mustang Mach-E sales down 67% and 50%, respectively. Ford has sold just over 10,500 electric vehicles in total so far this year... For comparison, Toyota sold just over 10,000 bZ models in the first quarter alone. That's more than Ford's total EV sales in Q1.
April was Ford's fourth straight month of lower sales figures from 2025, the article points out. So Ford is bringing back "employee pricing" discounts on most new 2025 and 2026 Ford and Lincoln vehicles, while also offering "purchase incentives" of up to $9,000 for 2025 Lightning models and up to $6,000 for 2025 Mustang Mach-Es. "It's also offering EV buyers a free Level 2 home charger, 24/7 live support, and proactive roadside assistance through its Power Promise program."
Read more of this story at Slashdot.
Categories: Linux fréttir
Who, Me? Lab worker built a fake PC to nuke his lunch
WHO, ME? Welcome to a fresh, tasty instalment of Who, Me? It’s The Register’s reader-contributed column in which readers confess about things they did at work that probably deserve to remain a secret. This week, meet a reader we’ll Regomize as “Ray” who told us he once worked in a research lab repairing nucleonic instruments. We understand they’re gadgets that use very short half-life isotopes that emit just enough radiation that it’s possible to measure the backscatter. According to the World Nuclear Association this is helpful to measure the level of coal in a hopper, or the thickness of paper! Like many workplaces, the lab Ray worked in had a microwave oven staff could use to warm their lunches, and a coffee machine too. The difference in this lab was that the appliances lived next to a sink used to wash the nucleonic kit. Ray’s manager decided that posed a risk to workers’ health – which it didn’t – so insisted the microwave and coffee machine go elsewhere. Ray’s solution was to screenshot his PC’s desktop, print it onto A3 paper, and laminate it. “The screen looked very realistic without requiring a backlight,” he said. So Ray moved it into an unused office and put a keyboard and mouse in front of it. He also found the coffee machine a new home where the manager wouldn’t go looking. “They were both still in use when I retired three years later,” he told Who, Me? Have you found a way to defy the boss and got away with it? If so, click here to send us an email. We’d love the chance to expose readers to your story! ®
Categories: Linux fréttir
Sovereign cloud is only possible if you’re Chinese or American: Gartner
It’s not possible to operate a completely sovereign cloud outside of China or the USA, according to Douglas Toombs, a VP analyst at Gartner. Speaking at the analyst firm’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney today, Toombs said only the US and China make all the tech needed for a sovereign cloud. Buyers elsewhere can’t avoid relationships with foreign providers. Toombs said that while US-based cloud vendors have created products they say can meet the needs of organizations that need a cloud that doesn’t have legal entanglements outside their chosen jurisdiction, the fact they’re ultimately owned by American corporations means it’s not possible to be certain a cloud provider can promise complete sovereignty. Even on-prem clouds like AWS Outposts, Azure Local, or Oracle’s Dedicated Cloud Regions, “need to phone home,” he said. The analyst doesn’t think attempts to create sovereign clouds will succeed. He mentioned past French attempts to create sovereign clouds named “Andromeda”, “Numergy,” and “Gaia-X”, which he says went nowhere - but did produce some nice white papers. He also cited The Rule of Three and Four, a maxim developed by Boston Consulting Group that asserts “A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest,” and argued that it predicts the cloud market has settled around AWS, Google, and Microsoft. Toombs allowed that some smaller clouds could thrive and will make it feasible to create sovereign SaaS providers and products. But he thinks that even aggressive moves to go on-prem won’t free organizations from dependency on US-owned clouds, an assertion he backed with the example of a Dutch healthcare provider that tried to build its own infrastructure but then experienced an outage when a supplier’s services went down along with a major cloud provider. 
If sovereign clouds fail to develop, it may be problematic because some European organizations are worried US-based cloud operators might leave the continent, forcing them into hasty and risky migration projects, according to Adrian Wong, a Gartner Director Analyst who also spoke in Sydney today. Wong said “heightened geopolitical tensions” are causing customers of major clouds to rethink their strategies, a decision he welcomes because he sees very few organizations bother to develop a cloud exit strategy. “Exit plans are overlooked,” he said, and users are “very much locked in” – especially when they use cloud-native services or platform-as-a-service. “Exiting within a timeframe of anything less than two years takes significant planning and investment,” he warned. “Exit strategies and plans are largely swept under the rug.” Wong says he is now seeing “the pendulum swing.” Not developing a cloud exit strategy is one of the ten big mistakes Wong sees users make. Also on his list are starting use of clouds with mission-critical and complex applications like ERP, assuming the cloud is appropriate for all applications, and expecting to get all the benefits of the cloud with every application. He also said it’s folly to assume that going multi-cloud will improve availability – unless users first tackle the more complex and expensive task of making applications portable. Wong said organizations that use multiple clouds should do so to access specific features of each, not to improve resilience. ®
Categories: Linux fréttir
Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab
The free/open source project OrcaSlicer is a popular fork of 3D printer slicing software from Bambu Lab. But Tuesday independent developer Pawel Jarczak shuttered the project "following legal threats from Bambu Lab," reports Tom's Hardware:
Jarczak's fork of OrcaSlicer would have allowed users to bypass Bambu Connect, a middleware application that severely limits OrcaSlicer's access to remote printer functions in the name of security. Jarczak said in a note on GitHub that Bambu Lab threatened him with a cease and desist letter and accused him of reverse engineering its software in order to impersonate Bambu Studio.
From Bambu Lab's blog post:
Bambu Studio is an open-source project under the AGPL-3.0 license. Anyone can take its code, modify it, and distribute it... That's what OrcaSlicer does, and 734 other forks do as well. We have no issue with that and never have. At the same time, a license for code is not a pass to our cloud infrastructure... Our cloud is a private service. Access to it is governed by a user agreement, not the AGPL license... [T]he modification in question worked by injecting falsified identity metadata into network communication. In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers... If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client.
"User-Agent is not authentication," counters OrcaSlicer's developer. "It is only self-declared client metadata. Any program can set any User-Agent." And "the User-Agent construction comes directly from Bambu Lab's own public AGPL Bambu Studio code.... So on what basis can anyone claim that I am not allowed to use this specific part of AGPL-licensed code under the AGPL license...? My work was based on publicly available Bambu Studio source code together with my own integration layer."
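The developer's point is easy to demonstrate: a User-Agent string is just an arbitrary request header that any HTTP client can set to anything. A minimal sketch with Python's standard library (the URL and version string here are invented for illustration):

```python
import urllib.request

# The User-Agent header is self-declared client metadata, not authentication:
# any program can claim to be any other program simply by setting the header.
req = urllib.request.Request(
    "https://example.com/api/status",            # placeholder URL
    headers={"User-Agent": "BambuStudio/9.9.9"}  # invented version string
)
print(req.get_header("User-agent"))  # the request will carry whatever we set
```

Because the header is freely forgeable, access control has to happen server-side, with per-client credentials or signed requests, rather than by trusting the client's self-description.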
But the bottom line is that Bambu Lab "contacted me directly and demanded removal of the solution."
I asked whether I could publish the private correspondence in full for transparency. That request was refused... They also referred to legal materials and stated that a cease and desist letter had been prepared...
I removed the repository voluntarily. That removal should not be interpreted as an admission that all legal or technical allegations made against the project were correct. I removed it because I have no interest in maintaining a prolonged dispute around this particular implementation, and no interest in continuing to distribute it.
YouTuber and right-to-repair advocate Louis Rossmann reviewed the correspondence from Bambu Lab — then pledged $10,000 for legal expenses if the developer returned his code online. ("I think that their legal claim is bullshit," Rossmann said Saturday in a YouTube video for his 2.5 million subscribers. "I'm not a lawyer, but I'm willing to put my money where my mouth is.")
The video now has over 129,000 views, with commenters vowing to back the case as requested. "Rossmann has not started a crowdfunding site yet," Tom's Hardware notes, "stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is."
Read more of this story at Slashdot.
Categories: Linux fréttir
Most Polymarket Users Lose Money, While Top 1% Claim 76.5% of Gains, Study Finds
In Polymarket's prediction market, "most people end up losing money," reports the Washington Post — typically a few bucks.
"Since Polymarket launched in 2022, a few thousand people have lost the bulk of the money... and an even smaller group — 0.05 percent of users — has gone home with most of the overall profits, according to a new analysis from finance researcher Pat Akey and colleagues."
A lot of users aren't that good at predicting the future. They're losing money at roughly the same rate as online gamblers betting on sports and other real-life events at traditional sportsbooks, according to the U.K. gambling regulator's analysis of 2024 data. On Polymarket, the odds of making a profit are slightly higher on weather and tech markets — and a little lower on sports...
On Polymarket, just 1,200 people took more than half the profits — $591 million, or more than $100,000 each. ["The top 1% of users capture 76.5% of all trading gains," the researchers write.] When you dabble in prediction markets, you're competing against these sophisticated players who consistently win. Most of those 1,200 big winners didn't place just a few smart bets. They appear to be pros making thousands of trades, mostly in the past year and a half, that were probably automated. One user made $3 million since January on more than a million trades about the Oscars, according to TRM Labs...
The most profitable participants are also just good at picking what to bet on, Akey found, winning so often it was statistically unlikely to be dumb luck. They had some sort of edge — expertise, deep research or, perhaps, inside knowledge.
"Our results suggest that the informational benefits of prediction markets come at a cost to unsophisticated participants," the researchers conclude.
Read more of this story at Slashdot.
Categories: Linux fréttir
