news aggregator
Classic Outlook's Quick Steps trip over Microsoft bug
If you're using Quick Steps in Microsoft Outlook and wondering why they're grayed out, a bug introduced in version 2512 is the culprit. Classic Outlook is approaching the twilight years of its prodigiously long life, but users can still fall victim to productivity-killing bugs – in this case, a problem with Quick Steps. Quick Steps automates common or repetitive tasks in Outlook. Always have to move a bunch of messages to a specific folder? Quick Steps is your friend. Pin an email and mark it as unread? Again, the actions can be lined up in Quick Steps and executed with a single click or a keyboard shortcut. Until Microsoft breaks it. In a support article, Microsoft has confirmed that in some situations, Quick Steps in classic Outlook can appear grayed out. The workaround (if rolling back or switching clients isn't an option) is to use a keyboard shortcut. "The shortcut will work even if the Quick Step is grayed out in the user interface," Microsoft wrote. The problem is that if a Quick Step contains actions that "can't be fulfilled," it's grayed out. Microsoft's own example states: "A Quick Step that moves a message to a folder and clears categories will be grayed out in messages where there are no categories applied." "This is known to happen with Quick Steps with Flags and Categories actions such as 'Clear flags on message' or 'Clear categories'." Classic Outlook has suffered several glitches of late. Microsoft admitted in April that it could occasionally chow down on system resources for no obvious reason. Then there was its tendency to explode when opening too many emails. Microsoft has been clear that Classic Outlook's days are numbered. Outlook 2024 is due to drop out of mainstream support in 2029. However, there remains much that Classic Outlook does which New Outlook doesn't, such as COM support. And, when Microsoft hasn't broken them, Quick Steps. ®
Categories: Linux fréttir
Europe wants out from under US tech – but first it has to find the exits
In late December, US Secretary of State Marco Rubio sanctioned former European Commission tech chief Thierry Breton for his role in leading "organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose." The architect of the EU's Digital Services Act (DSA) – a pet hate of the Trump administration – has yet to be deterred. Last month, he joined a chorus of calls for Europe to end its reliance on dominant US tech companies. "The time for an apologetic Europe is over," the former Atos CEO said in a rallying cry that points out we now live in a world "where digital sovereignty has become one of the central arenas of power politics." But what to do about it? US companies hold overwhelming positions in markets ranging from cloud infrastructure to personal productivity tools. Breton says Europe has a "constellation of [tech] players that, together, form a considerable base," but offers little explanation of how it might extract itself from the incumbent providers and what the new world might look like. One of his compatriots has, though. Nicolas Roux, systems engineer at French aerospace research lab ONERA, has put together a comprehensive analysis in an attempt to understand which systems might fail first under the kind of pressure the US has already exercised on European institutions and individuals. It also looks at how long they would take to recover, how Europe can reduce its exposure, and which levers – organizational, sectoral, or political – it should pull to ensure better digital sovereignty. The 137-page report is designed for Europe's decision-makers on tech and policy. The details are too numerous to summarize, but it offers a glimpse of some worst-case scenarios as well as cause for optimism. 
As the report points out, a sense of urgency has gripped European institutions following US sanctions on International Criminal Court (ICC) prosecutor Karim Khan, which led to his Microsoft services being suspended. Microsoft denied responsibility, saying it was the ICC's decision. The Dutch press later reported that the decision was made under duress after Microsoft pointed out that its obligations under the sanctions meant it would have to cut off service to the entire organization unless the ICC removed Khan's access. In March, Henna Virkkunen, Executive Vice-President of the European Commission with responsibility for technological sovereignty, said that Europe's dependence on American technology had become a security concern visible beyond specialist circles. The US dominates so many layers of technology, with so many interdependencies, that any effective move toward digital sovereignty must be based on an understanding of which are the most vulnerable and which are hardest to replace. Roux zeros in on Identity and Access Management (IAM), where the US dominates enterprise deployments with few exceptions. The report identifies "Microsoft, Ping Identity, and IBM as the market's leading operators, with Okta, Oracle Identity Governance, and CyberArk accounting for the majority of remaining enterprise contracts." It adds: "No European vendor appears in any tier of the competitive landscape. For European public administrations, this means that the layer of infrastructure responsible for authenticating every user and authorising every access decision is, in most cases, operated by a vendor incorporated in the United States and subject to American law." Roux points out that Microsoft 365, the productivity suite on which nearly all organizations rely, runs the Redmond vendor's Entra ID as its identity provider by default. 
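Where an authentication chain actually terminates shows up in the OIDC issuer an application is configured to trust. A minimal sketch of the kind of audit this implies, in Python; the domain list is an illustrative assumption, not an authoritative mapping:

```python
# Hedged sketch: classify the identity provider that terminates an
# authentication chain by its OIDC issuer URL. The domain list below
# is illustrative, not an authoritative market mapping.
from urllib.parse import urlparse

US_OPERATED = ("login.microsoftonline.com", "okta.com", "pingidentity.com")

def issuer_jurisdiction(issuer_url: str) -> str:
    """Return a rough jurisdiction label for an OIDC issuer URL."""
    host = urlparse(issuer_url).hostname or ""
    if any(host == d or host.endswith("." + d) for d in US_OPERATED):
        return "US-operated"
    return "other/self-hosted"
```

Run over every federation partner's issuer URL, a pass like this would surface exactly the Entra ID default the report describes.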
The report says: "The strategic sensitivity of this layer is compounded by a property it shares with no other: IAM dependency is invisible in normal operations and total in failure. An organization discovers its IAM dependency not when costs increase or performance degrades, but when access is denied. It represents an actionable 'kill switch.'" There is a European alternative in Keycloak, but even if a European organization chose to self-host the service on a European cloud, it would not be free from dependencies on US companies, which could be compelled to turn off services under US legislation, the report argues. "What does not hold is inter-organizational authentication. As long as partner organisations (ministries, contractors, other public bodies) operate Entra ID as their identity provider, external authentication chains pass through Microsoft infrastructure by default. Under pressure, the first thing that breaks is the ability to collaborate securely with anyone outside the organisation's own perimeter." There is a gap in the market for a European IAM provider as a fully managed service with the SLA guarantees and support model that public sector organizations can buy through existing procurement vehicles. But to counter the problem with inter-organizational authentication, Europe needs not a product, but a standard – "a shared European public sector identity federation framework, mandatory for public administrations, built on open protocols, and interoperable by design," Roux says. The market for cloud infrastructure and services is overwhelmingly dominated by US providers, which often interlock infrastructure and platform services with other technologies. "The lock-in is architectural: organizations have built dependencies on platform-specific services (Lambda functions, BigQuery pipelines, Azure Cognitive integrations) that have no direct drop-in replacement. 
Infrastructure can be migrated but application architecture cannot be switched without rethinking," the report says. Nonetheless, there are a bunch of European alternatives on the market. France's OVHcloud and Scaleway are among them, as are German providers Hetzner, IONOS, and STACKIT, owned by retail group Schwarz. It may seem impossible for European providers to replace AWS, with its mammoth scale and buying power, but for Roux, replacing AWS is the answer to the wrong question. "No European provider will replicate the full AWS service catalogue. That catalogue was built over twenty years by a company with access to essentially unlimited capital, operating in a continental domestic market with no regulatory friction. The conditions that produced it do not exist in Europe and will not be manufactured by policy. Asking for a European AWS is asking for a different history. The right question is different: for each layer, what does a given organization actually need, and is a credible European alternative available for that specific need?" The report points out that the most serious gaps are in three areas of cloud services. The first is advanced workloads, such as managed AI/ML pipelines and high-concurrency serverless functions. But the constraint only affects a small minority of public sector organizations and is "an irrelevance for the majority." Secondly, there is scale. OVHcloud's total 2024 revenue is approximately 0.9 percent of the figure AWS publishes. But a coordinated policy of investment at both EU and state level can help close that gap. Lastly, Europe struggles to coordinate services between providers that "operate excellent but largely siloed platforms." Roux says this problem might be solvable "through open standards and interoperability frameworks, but it requires deliberate architectural choices that organizations accustomed to single-vendor convenience are not always prepared to make." 
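The "deliberate architectural choices" Roux describes are, in practice, coding against neutral interfaces rather than provider SDKs. A minimal sketch, with class and method names of our own invention:

```python
# Sketch of provider-neutral design: the application depends only on a
# two-method interface, so an AWS-, OVHcloud-, Scaleway-, or disk-backed
# implementation can be swapped in without touching application code.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend so the sketch is self-contained; a real
    deployment would implement the same two methods over a cloud SDK."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]
```

The cost is forgoing single-vendor conveniences up front; the payoff is that migration becomes a backend swap instead of an application rewrite.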
Although starting from a low base, the European cloud market is set for rapid growth as investment mirrors geopolitical concerns. European spending on sovereign cloud infrastructure services is forecast to more than triple from 2025 to 2027, from $6.9 billion to $23.1 billion, Gartner reported in February, well ahead of any established region. Speaking to The Register, Rene Buest, Gartner senior director analyst, said European businesses are considering local and regional sovereign cloud providers for new cloud workloads, while they work to understand the complexities of migrating established workloads. This is just a glimpse of the problems – and practical measures – the report outlines. Some of the solutions lie at a policy level by driving demand through public procurement and by creating standards. Breton also sees Europe gaining the upper hand through policy, the single market, and by imposing EU rules on data, competition, algorithmic transparency, and taxation. But continuing to create rules that allow for digital sovereignty can be an uphill struggle in the face of US industry lobbying. Roux quotes the NGOs Corporate Europe Observatory and LobbyControl, which studied the EU Transparency Register. They concluded that the tech industry spent a record €151 million on EU lobbying, a figure that has increased by a third in two years. "Big Tech" employs more full-time lobbyists in Brussels than there are Members of the European Parliament. The European Commission is expected to address parts of the issue through a technological sovereignty package set to arrive at the end of May. It is likely to draw on a €234 billion European competitiveness fund, including a €20 billion package for AI infrastructure, supply chain cybersecurity liability provisions for digital infrastructure, and a strong orientation toward sovereign cloud and open source principles. 
The hope is that through policy and investment, Europe can get CIOs and tech buyers to overcome the barriers to collective action – that is, "each individual sourcing decision is locally rational, while the aggregate outcome (a continued and deepening operational and economic dependency, in the terms defined above) is collectively irrational." Europe may have been slow to address weaknesses in its digital sovereignty, but it has already proved it has the staying power to take on US might. It took 50 years for a consortium of European aerospace businesses from the UK, France, Germany, and Spain to take on dominant aircraft manufacturer Boeing. In 2023, the number of Airbus aircraft in service surpassed Boeing for the first time. Catherine Jestin, executive vice president of digital at Airbus, told The Register last year that the same could be possible in tech. "It's a long game. And if you look at the way China is approaching it, it takes time. It takes political will and the alignment of the industry," she said. Europe doesn't need to dominate the tech market to ensure its digital sovereignty. It only needs viable alternatives to US providers at each layer of the stack, rather than direct replacements for the biggest suppliers. It will take time, but it will never get there unless it makes a start. As Roux shows, there are those willing to provide a map. ®
Categories: Linux fréttir
The latest innovation in UK public transport: Schrödinger's trains
BORK!BORK!BORK! Guessing games are all the rage, and commuters trying to get home from London Victoria station found themselves flipping a virtual coin to guess the location of their train after Inspector Bork paid a visit to the station's platform board. London Victoria Station is a major transport hub for England's capital city. Trains from the station serve much of the southern part of the country and farther afield. Built around 1860, the station has had various platform display systems over the years. For a long time, the board was of the Solari split-flap type, replete with a delightful clickety-clack sound as destination information was updated. Today's board is a huge digital display which, while undoubtedly more flexible and capable of displaying far more information than the split-flap affair of old, is also susceptible to a visit from the bork fairy. Where the split-flap board might occasionally jam, the digital board could suddenly go inexplicably dark. As happened on May 7, 2026, when Victoria train station was at its busiest. Where platforms, stations, and times were usually listed, there was instead a network error followed by a clock. As such, while the location of trains might have been a mystery for commuters, at least they knew the time. Some travelers, likely tourists, looked confused. Others, probably regular commuters, continued their muscle-memory-propelled trudge toward the platforms. And in the back office? We suspect some frantic clicking of mouse buttons and hammering of keys while a harassed operator tried to work out what had happened to the data. For many passengers, the borked board was symptomatic of how their day had gone. Problems with the trains in the region had made national news, so an apparent admission that nothing was going anywhere was likely the icing on a particularly unpleasant cake. Still, at least the station is not short of places where adult beverages can be bought and consumed. 
Sometimes that's the best way to deal with a journey on the UK's public transport system, bork or no bork. ®
Categories: Linux fréttir
Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet. Thus it was a very bad day indeed when Taiwan’s entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it’s good to see the good repair of the grand tradition of young hackers causing havoc from their bedrooms. On the other, WTRF? The information released by the Taiwanese authorities is scant on details, but enough to be pretty sure what actually happened. It’s bad news not just for Taiwan but for more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s — and work as well in 2026 as you might expect. Oops. There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. A lot of the installed base globally is old, lacks over-the-air updates for security, and in any case spending money on new radios is normally at the bottom of the list for any state or public service organizations. Things have to get really bad first. 
Perhaps they just have. (North America is the only region where TETRA is uncommon, as it isn’t approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now on with the show.) The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn’t have any TETRA hardware, but a laptop connected to a radio and an ‘SDR filter’. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut. SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware could also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog to digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital to analog converter to play the results. What you get is triply terrific. Digital processing is perfect; analog processing adds noise and distortion. Nothing is fixed; everything can be re-engineered with new code. And it can be hog-whimperingly cheap. HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages. 
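The three components just described can be reduced to a few lines of arithmetic. A toy sketch (sample rate and carrier frequency are arbitrary choices, nothing TETRA-specific): the "ADC" digitizes a carrier, and the digital stage does the radio math of mixing it down to baseband.

```python
# Toy SDR pipeline: the "ADC" produces samples of a carrier, and the
# digital stage multiplies by a local oscillator at the same frequency,
# shifting the signal to baseband (a DC term) plus a double-frequency
# term that a real receiver would filter out before the DAC plays it.
import math

RATE = 48_000      # samples per second (arbitrary choice)
CARRIER = 12_000   # carrier frequency in Hz (arbitrary choice)

def adc(n: int) -> list[float]:
    """Stand-in for the analog-to-digital converter."""
    return [math.cos(2 * math.pi * CARRIER * i / RATE) for i in range(n)]

def mix_down(samples: list[float]) -> list[float]:
    """The 'radio math': multiply by a local oscillator in software."""
    return [s * math.cos(2 * math.pi * CARRIER * i / RATE)
            for i, s in enumerate(samples)]

# Averaging the mixed signal recovers the 0.5 DC (baseband) component.
baseband_level = sum(mix_down(adc(480))) / 480
```

The point is that nothing here is fixed in hardware: change the constants or the math and the same box becomes a different radio.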
There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor and love letters that look suspiciously like arrest warrants. Think of it as speed dating but with more guns and fewer no thank yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You’ll need it to unlock the cell door. It is the data detection and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it came from the original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn’t matter. It’s called a replay attack, and it is mostly used in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys. Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week. Of course, it can’t be banned. SDRs can’t be banned as a class, especially open source ones made out of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn’t matter if you’re dismayed or delighted that things like HackRF exist; that genie is out of the bottle. What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize, and never repeat keys. 
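That solved problem is challenge-response with fresh randomness: if the verifier never accepts the same challenge twice, a recorded exchange is useless on replay. A minimal sketch, with key provisioning simplified for illustration:

```python
# Replay-resistant authentication sketch: a fresh 128-bit random
# challenge per exchange, answered with an HMAC over a shared key.
# A captured (challenge, response) pair fails when replayed because
# the verifier refuses to accept any challenge a second time.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(32)  # provisioned into both ends

def issue_challenge() -> bytes:
    return secrets.token_bytes(16)  # big keyspace, random, never reused

def respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, seen: set) -> bool:
    if challenge in seen:  # reject replays outright
        return False
    seen.add(challenge)
    return hmac.compare_digest(respond(challenge), response)
```

Nothing here is exotic; the expensive part is retrofitting it into embedded radios and key fobs already in the field.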
That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing it will be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee’s knife is to the mugger’s. Critical infrastructure resilience. Just three little words, but if you say them, you had better mean it. And it won’t be cheap. ®
Categories: Linux fréttir
Ford's Electrified Vehicle Sales Dropped 31% in April From One Year Ago
Ford's sales of electrified vehicles — including hybrids and all-electric models — dropped 31% from April 2025, reports Electrek. "Hybrid sales fell 32% to 15,758 vehicles, while EV sales continued to crash with just 3,655 all-electric models sold last month, 25% fewer than in the year prior."
After discontinuing the F-150 Lightning in December, sales of the electric pickup have been in free fall. Ford sold just 884 Lightnings last month, 49% less than it did last April. The Mustang Mach-E isn't doing much better. Sales fell another 9% year over year in April, to just 2,670 models last month. Through the first four months of 2026, Ford's EV sales have fallen 61% from last year, with F-150 Lightning and Mustang Mach-E sales down 67% and 50%, respectively. Ford has sold just over 10,500 electric vehicles in total so far this year... For comparison, Toyota sold just over 10,000 bZ models in the first quarter alone. That's more than Ford's total EV sales in Q1.
April was Ford's fourth straight month of lower sales figures than in 2025, the article points out. So Ford is bringing back "employee pricing" discounts on most new 2025 and 2026 Ford and Lincoln vehicles, while also offering "purchase incentives" of up to $9,000 for 2025 Lightning models and up to $6,000 for 2025 Mustang Mach-Es. "It's also offering EV buyers a free Level 2 home charger, 24/7 live support, and proactive roadside assistance through its Power Promise program."
Read more of this story at Slashdot.
Categories: Linux fréttir
Who, Me? Lab worker built a fake PC to nuke his lunch
WHO, ME? Welcome to a fresh, tasty instalment of Who, Me? It’s The Register’s reader-contributed column in which readers confess to things they did at work that probably deserve to remain a secret. This week, meet a reader we’ll Regomize as “Ray” who told us he once worked in a research lab repairing nucleonic instruments. We understand they’re gadgets that use very short half-life isotopes that emit just enough radiation that it’s possible to measure the backscatter. According to the World Nuclear Association this is helpful for measuring the level of coal in a hopper, or the thickness of paper! Like many workplaces, the lab Ray worked in had a microwave oven staff could use to warm their lunches, and a coffee machine too. The difference in this lab was that the appliances lived next to a sink used to wash the nucleonic kit. Ray’s manager decided that posed a risk to workers’ health – which it didn’t – so insisted the microwave and coffee machine go elsewhere. Ray’s solution was to screenshot his PC’s desktop, print it onto A3 paper, and laminate it. “The screen looked very realistic without requiring a backlight,” he said. So Ray moved the microwave into an unused office, disguised it as a PC with the fake screen, and put a keyboard and mouse in front of it. He also found the coffee machine a new home where the manager wouldn’t go looking. “They were both still in use when I retired three years later,” he told Who, Me? Have you found a way to defy the boss and got away with it? If so, click here to send us an email. We’d love the chance to expose readers to your story! ®
Categories: Linux fréttir
Sovereign cloud is only possible if you’re Chinese or American: Gartner
It’s not possible to operate a completely sovereign cloud outside of China or the USA, according to Douglas Toombs, a VP analyst at Gartner. Speaking at the analyst firm’s IT Infrastructure, Operations & Cloud Strategies Conference in Sydney today, Toombs said only the US and China make all the tech needed for a sovereign cloud. Buyers elsewhere can’t avoid relationships with foreign providers. Toombs said that while US-based cloud vendors have created products they say can meet the needs of organizations that need a cloud without legal entanglements outside their chosen jurisdiction, the fact they’re ultimately owned by American corporations means it’s not possible to be certain such a provider can promise complete sovereignty. Even on-prem clouds like AWS Outposts, Azure Local, or Oracle’s Dedicated Cloud Regions “need to phone home,” he said. The analyst doesn’t think attempts to create sovereign clouds will succeed. He mentioned past French attempts to create sovereign clouds named “Andromeda,” “Numergy,” and “Gaia-X,” which he says went nowhere - but did produce some nice white papers. He also cited The Rule of Three and Four, a maxim developed by Boston Consulting Group that asserts “A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest,” and argued that it predicts the cloud market has settled around AWS, Google, and Microsoft. Toombs allowed that some smaller clouds could thrive and will make it feasible to create sovereign SaaS providers and products. But he thinks that even aggressive moves to go on-prem won’t free organizations from dependency on US-owned clouds, an assertion he backed with the example of a Dutch healthcare provider that tried to build its own infrastructure but then experienced an outage when a supplier’s services went down along with a major cloud provider. 
If sovereign clouds fail to develop, it may be problematic because some European organizations are worried US-based cloud operators might leave the continent, forcing them into hasty and risky migration projects, according to Adrian Wong, a Gartner Director Analyst who also spoke in Sydney today. Wong said “heightened geopolitical tensions” are causing customers of major clouds to rethink their strategies, a rethink he welcomes because very few organizations bother to develop a cloud exit strategy. “Exit plans are overlooked,” he said, and users are “very much locked in” – especially when they use cloud-native services or platform-as-a-service. “Exiting within a timeframe of anything less than two years takes significant planning and investment,” he warned. “Exit strategies and plans are largely swept under the rug.” Wong says he is now seeing “the pendulum swing.” Not developing a cloud exit strategy is one of the ten big mistakes Wong sees users make. Also on his list are starting cloud adoption with mission-critical and complex applications like ERP, assuming the cloud is appropriate for all applications, and expecting to get all the benefits of the cloud with every application. He also said it’s folly to assume that going multi-cloud will improve availability – unless users first tackle the more complex and expensive task of making applications portable. Wong said organizations that use multiple clouds should do so to access specific features of each, not to improve resilience. ®
Categories: Linux fréttir
Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab
The free/open source project OrcaSlicer is a popular fork of 3D printer slicing software from Bambu Lab. But Tuesday independent developer Pawel Jarczak shuttered the project "following legal threats from Bambu Lab," reports Tom's Hardware:
Jarczak's fork of OrcaSlicer would have allowed users to bypass Bambu Connect, a middleware application that severely limits OrcaSlicer's access to remote printer functions in the name of security. Jarczak said in a note on GitHub that Bambu Lab threatened him with a cease and desist letter and accused him of reverse engineering its software in order to impersonate Bambu Studio.
From Bambu Lab's blog post:
Bambu Studio is an open-source project under the AGPL-3.0 license. Anyone can take its code, modify it, and distribute it... That's what OrcaSlicer does, and 734 other forks do as well. We have no issue with that and never have. At the same time, a license for code is not a pass to our cloud infrastructure... Our cloud is a private service. Access to it is governed by a user agreement, not the AGPL license... [T]he modification in question worked by injecting falsified identity metadata into network communication. In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers... If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client.
"User-Agent is not authentication," counters OrcaSlicer's developer. "It is only self-declared client metadata. Any program can set any User-Agent." And "the User-Agent construction comes directly from Bambu Lab's own public AGPL Bambu Studio code.... So on what basis can anyone claim that I am not allowed to use this specific part of AGPL-licensed code under the AGPL license...? My work was based on publicly available Bambu Studio source code together with my own integration layer."
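The developer's point is trivially demonstrable: in any HTTP library, User-Agent is just a header the caller sets. A sketch using Python's standard library (the URL and client string below are invented placeholders, and no request is actually sent):

```python
# A User-Agent header is self-declared client metadata, not an identity
# proof: any program can set it to any value before sending a request.
from urllib.request import Request

req = Request(
    "https://example.com/api",  # placeholder endpoint
    headers={"User-Agent": "SomeOfficialClient/2.0"},  # invented name
)
# urllib stores header names capitalized, e.g. "User-agent".
declared = req.get_header("User-agent")
```

Because the header is attacker-controlled by construction, servers that must distinguish clients rely on credentials such as signed tokens, not on the User-Agent string.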
But the bottom line is that Bambu Lab "contacted me directly and demanded removal of the solution."
I asked whether I could publish the private correspondence in full for transparency. That request was refused... They also referred to legal materials and stated that a cease and desist letter had been prepared...
I removed the repository voluntarily. That removal should not be interpreted as an admission that all legal or technical allegations made against the project were correct. I removed it because I have no interest in maintaining a prolonged dispute around this particular implementation, and no interest in continuing to distribute it.
YouTuber and right-to-repair advocate Louis Rossmann reviewed the correspondence from Bambu Lab — then pledged $10,000 for legal expenses if the developer returned his code online. ("I think that their legal claim is bullshit," Rossmann said Saturday in a YouTube video for his 2.5 million subscribers. "I'm not a lawyer, but I'm willing to put my money where my mouth is.")
"Rossmann has not started a crowdfunding site yet," Tom's Hardware notes, "stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video had over 129,000 views so far, with commenters vowing to back the case as requested."
Read more of this story at Slashdot.
Categories: Linux fréttir
Most Polymarket Users Lose Money, While Top 1% Claim 76.5% of Gains, Study Finds
In Polymarket's prediction market, "most people end up losing money," reports the Washington Post — typically a few bucks.
"Since Polymarket launched in 2022, a few thousand people have lost the bulk of the money... and an even smaller group — .05 percent of users — has gone home with most of the overall profits, according to a new analysis from finance researcher Pat Akey and colleagues."
A lot of users aren't that good at predicting the future. They're losing money at roughly the same rate as online gamblers betting on sports and other real-life events at traditional sportsbooks, according to the U.K. gambling regulator's analysis of 2024 data. On Polymarket, the odds of making a profit are slightly higher on weather and tech markets — and a little lower on sports...
On Polymarket, just 1,200 people took more than half the profits — $591 million, or more than $100,000 each. ["The top 1% of users capture 76.5% of all trading gains," the researchers write.] When you dabble in prediction markets, you're competing against these sophisticated players who consistently win. Most of those 1,200 big winners didn't place just a few smart bets. They appear to be pros making thousands of trades, mostly in the past year and a half, that were probably automated. One user made $3 million since January on more than a million trades about the Oscars, according to TRM Labs...
The most profitable participants are also just good at picking what to bet on, Akey found, winning so often it was statistically unlikely to be dumb luck. They had some sort of edge — expertise, deep research or, perhaps, inside knowledge.
"Our results suggest that the informational benefits of prediction markets come at a cost to unsophisticated participants," the researchers conclude.
China’s agentic AI policy wants to keep humans in the loop
China’s Cyberspace Administration last week published draft regulations governing the behavior of AI agents and suggested humans should always retain the ability to review decisions taken by software. The draft expresses Beijing’s enthusiasm for AI agents with a call for efforts to develop datasets that accelerate development, along with security standards that make agents safe to use and ensure they behave ethically. There’s also a call to develop mandatory standards for how agents will behave “in fields such as healthcare, transportation, media, and public safety.” China also wants to participate in international fora that develop such standards. The draft calls for developers of AI agents to “clarify the reasonable boundaries and required authority for various decision-making methods, such as decisions limited to the user, decisions requiring user authorization, and autonomous decisions by the intelligent agent.” Those boundaries should “Ensure that users have the right to know and the final decision-making power regarding the autonomous decisions made by the intelligent agent, and that the intelligent agent's actions do not exceed the scope authorized by the user.” The draft identifies many tasks Beijing thinks agents might take on, including marking homework, analyzing medical images, evaluating employee performance and recommending promotions, helping disaster relief efforts, and even providing “intelligent management of the entire bidding and tendering process, ensuring standardization and efficiency throughout.”

Samsung turns off its TV and appliance business in China

Korean giant Samsung last week decided to quit China’s TV and appliance markets. “In response to the rapidly changing market environment, after careful consideration, Samsung Electronics has decided to cease sales of all home appliances, including televisions and monitors, in the Chinese mainland market,” states an “adjustment notice” on the Samsung China website.
Samsung will honor warranties and continue to provide after-sales service. The company hasn’t said why it’s quitting these markets in China. The Register expects the reasons have a lot to do with the rise and rise of Chinese consumer electronics companies, which can make a patriotic pitch in addition to pointing out the high quality of their products. Samsung’s not the first to decide it’s too tough to try trading televisions in China: Sony quit the country, too.

Thailand approves giant TikTok datacenter

The government of Thailand last week approved TikTok’s plan to spend ฿842 billion ($25 billion) on new datacenters in the country. Thailand’s Board of Investment said the project will see TikTok “install additional servers and expand data storage and processing infrastructure across Bangkok, Samut Prakan and Chachoengsao Province, supporting rising demand for digital services and strengthening Thailand’s role in regional digital infrastructure.” The Board also signed off on a 200 MW datacenter to be built by Skyline Data Center and Cloud Services Co, and a 134 MW facility from Bridge Data Centres.

Baidu to float its chip biz

Chinese web giant Baidu has filed paperwork to spin out its chip design business Kunlunxin. Baidu flagged its plan to do this in January, when it said the aim was to “independently showcase Kunlunxin's value, attract investors focused on the AI chip sector, and leverage its standalone listing to enhance its market profile, broaden financing channels, and better align management accountability with performance.” “This also supports the effort to unlock the value of Baidu's AI-powered businesses.” Kunlunxin’s chips suit inferencing and training workloads, but their performance can’t match Nvidia’s latest chips – or even four-year-old kit like the H100. That hasn’t stopped Baidu using the chips to power its own AI services, and major Chinese corporations also use the company’s chips.
Japan and EU to improve tech interoperability

The EU-Japan Digital Partnership Council recently convened its annual meeting and last week revealed that talks included “deepened discussions on the joint development and interoperability of data spaces” and promised to keep talking in a new “Data Strategy Working Group” that will “improve the interoperability of data policy frameworks.” The meeting also discussed a successful pilot on interoperable digital identities which apparently “showed that cross-border use is technically possible, even where governance frameworks and technical architectures differ. Using prototypes of digital identity wallets, the project demonstrated how interoperability can be achieved in practice between different systems.” As part of discussions, the EU and Japan agreed to begin working in new areas, including video games and audiovisual strategies.

Humanoid robot becomes Buddhist monk

Seoul’s Jogye Temple last week allowed a robot named Gabi to take the vows required of a Buddhist monk. Temple leaders reportedly decided to initiate the robot because they feel humanoid machines will soon become a part of everyday life. In February, the President of the Jogye Order, the Most Venerable Jinwoo, said “our lives have become ever more convenient thanks to cutting-edge science and AI. Yet the anxieties, anger, depression, and isolation – mental attachments and sufferings that science cannot resolve – are growing ever deeper.” “This does not mean that Buddhism withdraws from this vast technological civilization,” he said. “Rather, we aim to fearlessly lead the AI era and redirect its achievements toward the path of attaining peace of mind and enlightenment.” “In the age of AI and quantum science, peace of mind will be cultivated through Buddhism.” ®
PlayStation 3 Emulator Devs Politely Ask Contributors to Stop Submitting 'AI Slop' Pull Requests
Open-source PS3 emulator RPCS3 "has been around since 2011," Kotaku notes, and has made 70% of the PlayStation 3's library fully playable, "bolstered in part by the many users who contribute to its GitHub page." But their dev team "took to X today to very kindly and civilly request that users 'stop submitting AI slop code pull requests' to its GitHub page."
Then they immediately proceeded to tell the AI-brain-rotted tech bros attempting to justify their vibe-coding nonsense to kick rocks in the replies, which is somewhat less civil but far more entertaining to read...
My favorite one was when someone asked how the team was certain they weren't rejecting human-written code, to which RPCS3 replied: "You can't possibly handwrite the type of shit AI slop we have been seeing."
Yes, local LLMs are ready to ease the compute strain
KETTLE  We've been experimenting with LLMs for a while here at The Register, and if you ask our systems editor Tobias Mann and senior reporter Tom Claburn, locally installed coding assistants have actually become so good they could relieve some of the compute load that's pushing AI companies to raise their prices. This week on The Kettle, host Brandon Vigliarolo is joined by Mann and Claburn to discuss their work with locally-hosted LLMs, why we're revisiting the topic at all, how to do local LLMs safely, and whether there's orbital relief coming for the compute crunch. You can listen to The Kettle here, as well as on Spotify and Apple Music, or read the full transcript of this episode below. ®

---

Brandon (00:01) Welcome back to another episode of The Register's Kettle podcast. I'm Reg reporter Brandon Vigliarolo, and with me this week are systems editor Tobias Mann and senior reporter Tom Claburn to talk about some experiments they've been doing with AI coding assistants. But not just any AI coding assistant, mind you – we're talking about local ones that live right on your own machine. Guys, thanks for joining me this week.

Tobias Mann (00:24) Good to be here.

Thomas Claburn (00:25) Thank you.

Brandon (00:29) So before we jump into what you learned during these experiments and how effective local large language models actually are as coding assistants, let's talk a bit about why we're having this discussion in the first place. I understand that AI coding assistants are about to become way more expensive, and I think, Tom, these were stories that you wrote recently. So can you walk us through a bit of what's going on with the current cloud-hosted ones?

Thomas Claburn (00:52) Back in November, I think around Opus 4.5, pretty much all the developers started to realize that these models were actually getting pretty good, and vibe coding was less of a joke and more like, you know, maybe this will work.
And then, you know, around February with the OpenClaw craze, there was a lot more demand for sort of coding agents, and people would start running these for long periods of time. And it sort of caught Anthropic and others unaware – Google and OpenAI as well. There were a lot of capacity constraints; a lot more people were trying these things out, and they ended up having to find ways to limit demand through session limits, which made a lot of people unhappy, but they basically just didn't have the compute available to serve capacity. And on top of that, they're serving a lot of these at a price that is loss-leading. They're trying to get people into the business, but these are unprofitable workloads for them. And if you look at something like Mythos, which came out, which is their big security model, it was too expensive for anybody but large companies with expensive payrolls to run.

Brandon (02:08) Right, right.

Thomas Claburn (02:10) It's clear that they're looking for ways to increase their revenue, because they're investing a lot in the infrastructure to make this run, but they don't yet have the recurring revenue that justifies all this. The ramps look good, they're bringing more people on, but they invested a lot of money in this.

Brandon (02:29) OpenAI famously has never actually turned a profit in its history. I don't know about Anthropic personally, but I can't imagine they're doing a whole lot better. And so I understand one of the two specific examples you had was that Anthropic recently yanked Claude Code from Pro plans, but only for some people. Is that correct?

Thomas Claburn (02:49) Yeah, and they wrote that off as an A/B test. Basically they were doing live A/B testing and people noticed, and they were saying, oh, well, no, that doesn't apply to everyone, we're not going to change or take away from existing Pro users. But clearly there is someone there saying, hey, can we get away with charging this much but providing less service?
And that doesn't happen unless you're trying to figure out a way to increase your revenue and reduce the demand on your services.

Brandon (02:53) Okay. Totally. Did they backtrack on that at all, or is that A/B test still going on?

Thomas Claburn (03:23) I don't think it's still going.

Tobias Mann (03:24) They really do do a lot of A/B testing. I have a Claude Code Max subscription that has a 50% discount on it right now. So I'm a little hesitant to give it up, because yeah, it's a hundred bucks a month and I don't use it nearly enough to justify that. But also, if I cancel and decide I want it back, it'd be 200.

Brandon (03:46) Yes, that's the reason I'm still an Nvidia GeForce Now cloud gaming subscriber, right? Because I was there in the beta test and I've never given that discount up, even if I haven't used it in a while. So I understand. Anthropic did that, and then GitHub also has just straight up jumped to metered billing for AI, I think. Correct?

Thomas Claburn (04:05) Yeah, and they were taking a huge loss on things, because they would give you a flat rate, but then people would use the most expensive models. And of course, those things are billed at different rates, and offering a flat rate versus these very inflated Opus 4.7 models – which also take a lot longer to process stuff; even if they're a little bit more efficient, they'll think for longer periods – it's just, they're losing money. So everyone has to go to metered billing. And once that happens, it's going to cost people a lot of money. You can look at it now: even on a subscription plan, you'll write up a little widget and you look at the thing and it's, you know, $2 worth of whatever. You think, well, is that worth it? Maybe. And then if it's a more substantial project, you know, people spend hundreds of thousands of dollars on stuff. And if that's not returning you any revenue, are you still going to do that?
So it's going to be interesting to see how this goes.

Brandon (04:59) Maybe local LLMs, like what we're here to talk about today, are kind of the market control, right? I'm sure there are gonna be people who were using these paid services, or at least were, that are gonna say: I don't care what the justification is – whether they're trying to make more money, sure, they might deserve to, or whether they just need to reserve compute resources. Either way, I can't afford to pay for this, so I'm going local. Maybe that'll be the cost control, right? Maybe there'll be some balance that kind of equals out there between "we're losing customers, so we've got to make this cheaper" versus "we need to actually get some return on our investment someday." But I guess either way, right, this discussion is kind of indicative of why we're talking about using local LLMs – specifically, I believe, coding assistants, which is what the two of you have been spending some time working with. And I understand you've both had success in various ways with this. Let's talk a bit about, I guess, the one large story you wrote this week about local LLMs, and just kind of more broadly what you guys think of them.

Tobias Mann (06:05) Many of us on the team have been playing with local LLMs in some shape or fashion for a couple of years now. And probably within the last year, certainly in the last six months, the models that are small enough that you can run on consumer hardware – and I'm not talking cheap consumer hardware, I'm talking about high-end consumer GPUs, quasi-workstation mini PCs, higher-end MacBooks and Macs – the quality of those models has jumped from being kind of like toys, tech demonstrators, to being really rather competent. At the same time, we've also seen the rise of these agentic coding frameworks. That's the other part of the equation. These are things like Claude Code.
Claude Code is a framework that connects to models running in Anthropic's various data centers and cloud providers, and is what's actually orchestrating the generation of the code, the testing of the code, the validation of the code, and allowing developers to kind of use these as actually useful tools, rather than just getting a code snippet that may or may not work out of a model, as you might have done with ChatGPT four years ago. Right around the time that Microsoft was going to usage-based billing and Anthropic was toying around with kicking the $20 a month Pro users off of Claude Code entirely to save on compute, Alibaba's Qwen team popped in with a relatively small 27 billion parameter LLM.

Brandon (08:05) Relatively small. I just think it's funny how quickly the parameters have grown over the years. It's small. It's only a few billion, you know.

Tobias Mann (08:08) Yeah, it's only 27 billion. You know, they popped in and they presented this as being frontier-quality coding out of a pretty small model. And so with all of the harnesses you need to do this, and now a model that is supposedly competent, it was just kind of the perfect storm, so to speak, to start looking into whether or not these small models could be a replacement for some part of the development flow, or for the entire development flow. And it's surprising just how good these small models have gotten.

Thomas Claburn (08:53) I was experimenting just recently with Qwen 3.6 and it's, whatever, 35 billion parameters ... but it's a mixture of experts, so it's actually only like 3 billion, I think, when it's running. And it's an 8-bit quantization. And it's actually working pretty speedily. And I was doing a sort of comparison test to see whether it would do a drag-and-drop metadata removal app on a Mac, which is a very particular kind of thing. And initially it kind of suggested some things that were wrong.
And I sort of cross-checked that with Claude and OpenAI, and they both came up with things that were not really right either, and then when I rephrased the question more carefully, they basically came up with the same answer as Claude. And what it tells me, to your point about the harnesses, is that a lot of what makes local coding work is how good the local harness is. And this was a point that came up yesterday in a piece I was working on about Mozilla, when they were talking about all the bugs they fixed with Mythos. One of the people I was talking to, Davi Ottenheimer, argued pretty strongly that you can do Mythos-quality work with a much smaller model as long as you have a good harness. Unfortunately, a lot of the setup of that is very kind of... there's not a standard way to do it. So people will either figure out a way that makes it work, or they'll set something up and it just doesn't work, and it's not really clear why that happens. And there's a lot of just sort of arcana about what skills you have and what the pipeline looks like. People are still figuring it out. But I think that local is where it will go, because there's nothing that beats the price of being able to run this for next to nothing – excluding your very expensive hardware.

Brandon (10:59) And it's improving to the point where it's not something that, like a while ago, was like: this doesn't really work. Now we're reaching the point where these local models are viable, right? Well, like you said, you've got to word things carefully. I mean, that feels like anything in the early days of AI, right? It's like, OK, you've got to word it carefully. But eventually, it's going to get better to the point where it's not going to have to be so particular. And you get the same results, hopefully.

Tobias Mann (11:24) Yeah, there are two key technologies that I think have really helped these smaller models compete.
The first, as Tom mentioned, is mixture-of-experts models. They only use a subset of the total parameter count for each token generated, which reduces the barrier to entry for hardware. The larger the models get, the more memory bandwidth you need in a consumer or even workstation class of product. It gets absurdly expensive as your memory bandwidth requirements increase.

Brandon (12:01) Even for doing some of the basic ones here, I think you wrote in your story that you need an M5 Mac with 32 gigabytes of memory, or 24 gigabytes with multiple GPUs. You need a beefy machine from a consumer perspective to run this stuff. I've got an M1 Mac, and I wondered if I could run some of these. My Mac's pretty fast; I haven't needed to think about upgrading it in several years. And I looked at them and there's no way.

Tobias Mann (12:29) So older Macs can do it. You will run into issues where the prompt processing side of it – that's the "hit enter on your prompt and then you wait" part – gets to be problematic. Like, you're talking several minutes of waiting for it to start generating a response, because older Macs lacked the matmul acceleration necessary for this, so they were brute-forcing a lot of the compute on the GPU. Starting with the M5, they integrated the matmul acceleration into the GPU. It makes a huge, huge difference in terms of performance. That's why we recommended newer Macs. Tom and I, I think, are both testing on older M-series Macs. Yes, it can work, and especially with the 35 billion parameter mixture-of-experts model it's a little bit better, but the quality is generally worse than the dense 27 billion parameter model.

Brandon (13:39) I guess I can understand that, right? I mean, the more processing you can get done the faster, the better the response is.
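The mixture-of-experts point Tobias makes can be sketched in a few lines: a gating function scores every expert, but only the top-k actually run for a given token, so compute per token scales with k rather than with the total parameter count. This is a toy illustration only – the expert functions and gate scores below are made up, and no real model's architecture is this simple.

```python
import math

def moe_forward(token, experts, gate_scores, k=2):
    """Toy mixture-of-experts step: softmax the gate scores,
    run only the top-k experts, and mix their outputs.
    Compute cost scales with k, not with len(experts)."""
    exps = [math.exp(s) for s in gate_scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the k most strongly gated experts for this token
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    # Only these k experts' "parameters" do any work here
    return sum(probs[i] * experts[i](token) for i in top_k), top_k

# Eight tiny stand-in experts; only two run per token
experts = [lambda x, m=m: m * x for m in range(1, 9)]
gate_scores = [0.1, 2.0, 0.2, 1.5, 0.0, 0.3, 0.1, 0.05]
output, used = moe_forward(10.0, experts, gate_scores)
```

The memory-bandwidth trade-off Tobias describes falls out of this shape: all eight experts' weights still have to sit in memory, but each token only pays the compute of two of them.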
Tobias Mann (13:48) That's a really important part of this, because the other piece – the thing that has changed so that small models can be this competitive – is something called test-time scaling. We saw this first with DeepSeek and OpenAI o1: you hit enter on your prompt and then you see the model thinking, and the model can work through different paths and then choose which path it wants to present to the user at the end. The idea behind test-time scaling is that you can take a smaller model and have it think for longer, in order to make up for the lack of parameters in that model. And so we have both of those things coming together in models like Qwen 3.6 27B or Qwen 3.6 35B.

Brandon (14:30) Okay, cool. Now, for those who are interested in setting this up and go, "okay, I've got some hardware that's beefy enough and I think I'm willing to give this a shot" – this has also gotten a lot easier. I think in the past year, year and a half, two years, it's gotten multiple factors simpler to actually set up one of these things and run it locally. Is that accurate to say? It seems like it's gotten a lot simpler to configure this.

Thomas Claburn (15:10) People often use Ollama or Unsloth; I'm using OMLX, which uses the Mac MLX framework. And these are basically the model serving platforms. You can get your model from a variety of places – Hugging Face is a very common one – but a lot of the model platforms like Ollama will fetch the model for you and handle all the installation stuff. The trick is that a lot of them have different formats. And if you're using llama.cpp directly on your computer, which is the C-based model runner, it's going to have a different format than, say, something else. And they'll all talk to each other, but it tends to lock you into one particular way of doing it, and you get used to it.
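The test-time scaling idea Tobias describes above can be illustrated as best-of-N sampling: draw several candidate answers from the same small model and keep the one a scorer likes best, spending extra inference compute instead of extra parameters. The "model" and scorer here are deliberately trivial stand-ins, not how any real reasoning model samples.

```python
import random

def best_of_n(prompt, sample, score, n=8, seed=0):
    """Toy test-time scaling: spend n times the inference compute
    on one prompt and keep the highest-scoring candidate."""
    rng = random.Random(seed)
    candidates = [sample(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# Stand-in "model": guesses an integer; the scorer prefers guesses near 42
sample = lambda prompt, rng: rng.randint(0, 100)
score = lambda guess: -abs(guess - 42)

one_draw = sample("prompt", random.Random(0))      # a single, unscaled answer
scaled = best_of_n("prompt", sample, score, n=32)  # same model, 32x the compute
```

Because the single draw is among the 32 candidates here (same seed), the scaled answer can never score worse – which is the whole pitch: a small model that thinks longer closes some of the gap to a bigger one.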
There's not really a right way of doing it right now, and that's part of the problem: everyone's kind of figuring out, what's the right way to do this? Which one do I want to use? How do I configure it? Even just looking at the model and trying to decipher the quantization and the features it has isn't always clear to everybody. That, I think, hopefully will become more standardized as you get more common knowledge about "yeah, this one works really well for me." Throughout the forums every week there's someone saying, "yeah, this model is great for XYZ," and we'll try that out. I mean, that's really the experience you have to have: figure out what you're gonna use it for, try it, and see what other people are doing. And you can probably arrive at something that's useful locally.

Brandon (16:45) Useful locally, I guess, also implies the need to do some security legwork, right? I know when we first started writing about local LLMs, things like OpenClaw – I mean, the going headline for any of those was "this local LLM has caused chaos for somebody again," right? I think, Tom, you wrote a couple of stories recently about running local LLMs safely. Has it gotten to the point where it's easier to do that safely, or is that still going to be a big concern for anyone doing this?

Thomas Claburn (17:17) It is easier to do. The setup can be pretty complicated for these anyway. I just spent an evening building a sandbox for the Py agent, because Py is a very permissive agent that comes out of the box in YOLO mode. It can sort of do anything. It has a very limited command set, but it has very few limitations, and that's by design. It's sort of like, in the same way, Flask is a very open Python framework; it's not this sort of "batteries included" thing, you know, compared to Django. Something like Claude will come with a bunch of sort of predefined ways to do things.
Claude has its own sort of sandboxing system, and you can add a lot of safety through things like hooks. You know, there are people who will write hooks that will intercept dangerous commands like, you know, rm. So there are a lot of ways to do it. Docker has a sandboxing system; that's what I tried to build on. Basically, I figured out a way to do a Docker sandbox that runs Py and protects the local file system but leaves the internet space open, and those are kind of the security decisions you have to make. Because if this thing is totally enclosed in a VM and there's no way out, it can't really do anything! I mean, it can do anything with what you stick in the VM, but if you want to work on a project on your own system, you have to break that boundary somehow to get the file across and give it access, and then if you need to update something you have to open it up to a code repo somewhere. So there are a lot of security decisions you have to make, and for me the biggest one was just making sure it doesn't mess with my local files. That gives me a little bit more confidence to run a model that I don't really know how well it will perform. Having had Claude for a long time, I'm a little bit more confident that it behaves well, but the risk is there for all of them.

Tobias Mann (19:10) So we looked at, I think, three different agent harnesses in the piece. Claude Code, which you would think is for work with Anthropic's stuff, but it works just fine with local models – it's two additional commands and you're up and running. It's very heavy; the system prompt is enormous, and so if you have lesser hardware you might struggle a little bit with it. We also looked at Cline, which is a VS Code extension that is very easy to install and pretty fast to configure. And then we looked at PyCodingAgent, which Tom had suggested that we discuss as well.
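The hook idea Tom mentions – intercepting dangerous commands like rm before the agent runs them – can be sketched as a deny-by-default check. The deny list and function below are illustrative only; they are not Claude Code's actual hook API, and a real hook would be far stricter.

```python
import shlex

# Illustrative deny list; a real hook would be far more thorough
DANGEROUS = {"rm", "dd", "mkfs", "shutdown", "reboot"}

def allow_command(cmd: str) -> bool:
    """Return True if the agent may run cmd, False to block it.
    Parses the command shell-style and checks the program name,
    so '/bin/rm -rf /' is caught as well as plain 'rm'."""
    try:
        parts = shlex.split(cmd)
    except ValueError:
        return False  # unparseable input: block by default
    if not parts:
        return False  # empty input: nothing to approve
    program = parts[0].rsplit("/", 1)[-1]  # strip any path prefix
    return program not in DANGEROUS
```

A real hook would also have to worry about shells invoked indirectly (`bash -c 'rm ...'`), pipes, and aliases – which is part of why Tom ends up reaching for a Docker sandbox rather than pattern-matching alone.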
Out of the box, Claude Code and Cline both default to user-in-the-loop, deny-by-default kind of situations, where they'll ask for permission before performing any commands or writing any code. They'll say, "I want to write this code. What do you think? Do you want to proceed?" But they can be made to go fully automatic and just say, you know, I'm not worried, YOLO, let's go. And so that's a different security model than what we saw with PyCodingAgent, which, to Tom's point, is just pure YOLO mode out of the box. And so the security models differ wildly depending on which agent harness you're using, or which sandbox you're trying to play in, so to speak. There are several kinds of agent sandboxes that have emerged that default to blocking all outbound network activity, which really limits the capabilities of the agent and forces you to be deliberate about what you do and don't want it talking to. Others are focused on isolating, kind of limiting the blast radius if the agent decides to go AWOL and rm -rf, you know, the root file structure and just take the whole thing out. That's fine if it's in a container and it destroys the container, because you run two commands and you're back up and running again. It's less okay if you're running bare metal.

Brandon (21:32) So, security considerations: it seems like the core is basically just know what you're working with, right? Like, don't deploy an agent if you don't at least have some idea how the security apparatus built into it functions by default, right? And just what you can do with it. But I guess, whether we think about security or not, a lot of the conversation around the need to run LLMs locally seems to boil down to compute resources and the cost to maintain them, the cost to operate them, the cost to serve them. And I guess, Anthropic, speaking of Claude, right?
Anthropic's big longshot this week, I guess, was a plan, or a partnership they signed with SpaceX, to occupy some space on the fleet of orbital data centers that Elon Musk seems intent on building. Tom, so is that gonna happen?

Thomas Claburn (22:27) [Laughs.] I don't know. I would think that they would put them in the ocean before they would put them in space. And, you know, they talk about data centers, but I'll wait and see if they actually build them on land first, because there's a lot of terrestrial construction that is planned and hasn't happened. And we'll see.

Tobias Mann (22:49) Yeah, the whole idea is that in space, if you put the satellites in a sun-synchronous orbit, then they have basically unlimited power. The problem is that you have to get them there in the first place, which you need a launch vehicle for – which, last I checked, Starship still does not work.

Brandon (23:09) I was gonna say, this seems awfully familiar to me if we just change orbital data centers to Mars colonization, right? Same problem here: we gotta have a vehicle that can get us there, and we do not.

Thomas Claburn (23:21) The Hyperloop will be the way they'll take it out there.

Brandon (23:24) Yeah, right.

Tobias Mann (23:28) And once we get the orbital cluster in place, Elon wants to put a mass driver on the Moon so that we can put even more of these things into deep space, for reasons, I guess.

Brandon (23:42) It just seems like there's a lot of... I don't know, the idea that Anthropic is gonna get on board with these SpaceX data centers in orbit feels to me a lot like when a data center company says, "hey, we just signed a huge deal with this company that makes nuclear reactors that don't exist yet." And it's kind of like, cool guys, well, let us know when we've actually got a real solution for the compute crisis that you're dealing with right now, that you caused.
Thomas Claburn (24:05) I kind of interpret the whole space thing as, "we made a deal with SpaceX and we have to say something nice about their future plans."

Brandon (24:18) Right. Yeah.

Tobias Mann (24:19) This really boils down to: Anthropic is getting access to Colossus One, this massive, what, 150-megawatt AI factory, purpose-built for GPU training and inference. And so I think really what they need is compute, and they cannot get enough of it. The inflection point has hit and we're seeing adoption, which means we need compute for inference, and we need more compute for inference than we've had in the past. And so I think really what this is, is: we'll say whatever you want. We will say that we will ride along on your Starship into the heavens and live in your space data centers. Just give us access to Colossus, please, because we're dying for compute.

Brandon (25:15) We need it now, and it'd be great if it happened someday in orbit, right? So in the meantime, I guess, basically: have we reached the point where localized AI – local LLM coding agents, right – might be able to ease some of the compute stress that these companies are feeling? Or is this still early days, something that's going to have to be developed, not worth it for the average developer?

Thomas Claburn (25:41) I think they're going to be useful for sort of prototyping stuff. One of the things I've done is, I'll run it through the local one and then I'll have Claude check it. You often get a lot of, you know, code fixes that way. So it is a way to offload some less important jobs. I mean, you don't need a frontier model for everything.

Brandon (25:49) Right. Right. I think that was kind of an argument you made, Tobias, about, you know, using a massive data center to build an HTML page not being a good use of resources.
Tobias Mann (26:09) Right. Using the biggest, baddest model to write some HTML is probably not the most efficient thing to do, and it's certainly well within the capabilities of these small models. The other thing I'll say is, if you look at how GPT-5 works, if you go to ChatGPT, not Codex, when you first enter a prompt, it gets routed to one of three models based on the complexity of that prompt. Conceivably, we could do the same thing with local models, where you sign into Codex, it does a check. If you have sufficient hardware, it will run some portion of that query through the local model, do a yes/no check on the big model in the cloud, and decide, at that point, whether it needs to be regenerated via the API, or it can move forward with what's generated locally. So there's definitely a path forward for local playing a bigger role in reducing the amount of compute required to scale it.... Brandon (27:25) I guess the only key caveat there would be that if you're gonna install local LLMs on people's machines to split your compute load, you should probably let them know first, right Google? Tobias Mann (27:38) You probably should. Brandon (27:43) Probably. Or you can just do it and ask for forgiveness later on. Who's gonna uninstall Chrome? You? Ha ha ha. Tobias Mann (27:49) Yeah, the other thing I would point out is that, while a 24- or 32-gigabyte GPU is very expensive, we're talking anywhere from $1,000 to, you know, $4,000-plus for GPUs with that memory, those GPUs could serve that model to an entire team, realistically. And so if you were thinking about this from an enterprise adoption standpoint, you could buy one machine that sits in the corner, basically silent, that could serve an entire dev team with this smaller model. Or you could spend a whole lot more, but still something that fits on a desktop in the corner, that runs a big model, like a trillion-parameter model, locally on that system and for that team. 
We're not just limited to these small models. You and I might be, but from an enterprise standpoint, a $70,000 DGX Station, for example, is capable of running very large models, trillion-parameter scale models. And that's less than the cost of one developer for a year. Brandon (29:06) Yeah, so maybe that's the case now, right? Maybe we've just reached a point where there's enough value in these local models as a sort of prototyping testbed, as an entry-level dev replacement to do the first work before someone more experienced or with more parameters reviews it. Yeah, so it might be there. That's interesting. I'll be interested to see how the evolution of AI models develops and, like you said, the kind of linking between cloud-based versus local. It could be the next phase of the AI industry's evolution. We'll see. We'll see. Something's got to give with compute, right? No matter what it is, we are going to be sure we're here on The Register to write about it and here at the Kettle to talk about it. And until then, we will see you next week on the next episode.
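The routing idea Tobias describes, a cheap complexity check deciding whether a prompt stays on local hardware or escalates to a cloud API, can be sketched in a few lines. Everything below (the heuristic, the threshold, the function names) is a hypothetical illustration of the concept, not any vendor's actual router:

```python
def estimate_complexity(prompt: str) -> float:
    """Crude complexity score in [0, 1]: longer prompts and prompts with
    multi-step or architectural language score higher. A real router would
    likely use a small classifier model instead of keyword matching."""
    score = min(len(prompt) / 500, 1.0)
    for keyword in ("refactor", "architecture", "concurrency", "prove"):
        if keyword in prompt.lower():
            score += 0.3
    return min(score, 1.0)


def route(prompt: str, has_local_gpu: bool, threshold: float = 0.5) -> str:
    """Send simple prompts to a local model when hardware allows;
    escalate complex ones (or all prompts, absent a GPU) to the cloud."""
    if has_local_gpu and estimate_complexity(prompt) < threshold:
        return "local"
    return "cloud"


print(route("Write an HTML page with a heading", has_local_gpu=True))      # local
print(route("Refactor this concurrency-heavy module", has_local_gpu=True)) # cloud
```

The design choice mirrored here is the one from the conversation: the cheap check runs first and locally, so the expensive cloud model is only consulted when the heuristic says the job warrants it.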
Categories: Linux fréttir
Honda Patents a Fake Clutch for Electric Motorcycles
An anonymous reader shared this report from Electrek:
A newly revealed Honda patent shows the company developing a simulated electronic clutch system for electric motorcycles, complete with torque-boost launches and even haptic feedback designed to mimic the feel of a combustion engine.... Instead of using a traditional mechanical clutch, the system uses electronics to alter how the motor responds based on clutch lever position. Pull the clutch halfway in, and the system proportionally reduces motor output. Pull it fully, and power is cut entirely, regardless of throttle position.
But the more interesting part is how Honda intends to recreate the behavior riders actually use clutches for. According to the patent as reported by AMCN, riders could preload the throttle while holding in the clutch lever, then rapidly release the lever to trigger a burst of torque — essentially simulating the hard launches motocross riders rely on with gas bikes. Honda believes that could be useful in competitive riding situations where precise power modulation matters, especially on loose terrain or during aggressive starts.
Honda also appears to be working on recreating the feel of a gas bike, not just the control inputs. The patent describes multiple vibration motors placed in the handlebars and near the clutch lever to provide haptic feedback that simulates engine vibration and even the "bite point" sensation of a clutch engaging. In other words, Honda may be trying to make an electric dirt bike feel mechanically alive, or at least the old-school idea of what a breathing dirt bike used to feel like.
Read more of this story at Slashdot.
Categories: Linux fréttir
Big Tech is Moving Data Through the Gulf Using Fiber-Optic Cables Alongside Iraq's Oil Pipelines
Major American cloud companies with data centers in the Persian Gulf "are channeling data out of the war zone through fiber-optic cables that an Iraqi telecom has strung alongside crude-oil pipelines," reports RestofWorld.org:
The data centers serve customers in more than 190 countries, processing transactions, storing files, and running applications for businesses and individuals from Latin America to South Asia. When Iranian drones struck Amazon's facilities in the United Arab Emirates and Bahrain on March 1, the effects spread across the region. Apps of major banks in the UAE, including Abu Dhabi Commercial Bank, stopped working. Payment and delivery platforms went offline. Snowflake, a U.S. enterprise software company used by thousands of businesses globally, reported Middle East service disruptions tied directly to the Amazon Web Services outage. Amazon told its customers to migrate their workloads out of the Middle East...
[Data from] banking, payment, and enterprise platforms normally travels to Europe through cables running under the Red Sea and the Strait of Hormuz, then connects onward to users across the world. The war has put those cables at risk. The overland route through Iraq is meant to serve as a backup if the sea cables are disabled... [Martin Frank, strategic adviser for IQ Networks, the company that built the network, told Rest of World this overland route is already carrying live traffic.] The company, based in Iraq's Kurdistan region, runs fiber from the southern tip of Iraq to the Turkish border. It is now extending the network through gas-pipeline corridors across Turkey to the European border, with the first link expected early next year, Frank said. When that extension is complete, cloud providers will — for the first time — have the option of an unbroken land-based fiber path from the Gulf into the European network, connecting onward to Frankfurt, Amsterdam, London, and Marseille, from where their data connects back to U.S. users.
The advantage of this alternative route is that oil and gas pipelines come with their own security perimeters, access roads, and maintenance corridors already built around them, allowing a telecom company to lay fiber without digging new trenches through difficult terrain. Iraq avoided the fate of earlier overland routes that collapsed, thanks to a sustained period of stability and because existing pipeline infrastructure provided ready-made corridors for laying fiber, Doug Madory, director of internet analysis at network intelligence firm Kentik, told Rest of World... IQ Networks' route, called the Silk Route Transit, has been running since November 2023. The network currently carries enough data to stream about 400,000 high-definition videos simultaneously, Frank said.
The land route is faster. Data traveling through submarine cables from the Gulf to Europe takes about 150 milliseconds. The Iraqi terrestrial route cuts that to roughly 70 milliseconds — a difference that matters for video calls, financial transactions, and applications that run on artificial intelligence, according to IQ Networks.
Read more of this story at Slashdot.
Categories: Linux fréttir
Challenging UPS and FedEx, Amazon Opens Its Shipping Network to All Businesses
This week Amazon opened up its parcel shipping, fulfillment, and distribution "to businesses of all types and sizes." Any business can now ship, store, and deliver "using the same supply chain that supports Amazon," according to Monday's announcement of "Amazon Supply Chain Services."
The move sent shares of UPS and FedEx "tumbling" Monday, writes GeekWire. And though both stocks bounced back as the week went on, GeekWire sees this as the latest example of Amazon "turning its internal capabilities into products and services for sale..."
"Amazon had already surpassed both carriers to become the nation's largest parcel shipper by volume, according to parcel-analytics firm ShipMatrix."
Initial customers include Procter & Gamble, which is using Amazon's freight network to transport raw materials; 3M, which is using it to move products to distribution centers; Lands' End, which is fulfilling orders across sales channels from Amazon's warehouses; and American Eagle Outfitters, which is using Amazon's parcel service for last-mile delivery. The service can fulfill orders placed through platforms that compete with Amazon's own marketplace, including Walmart, Shopify, TikTok, and others... Peter Larsen, vice president of Amazon Supply Chain Services, compared the launch to the origins of Amazon's cloud business...
In addition to putting Amazon in competition with existing players in the logistics industry, the move also raises questions about data privacy. Amazon has faced accusations of using nonpublic seller data to compete against merchants on its marketplace, which it has denied. Larsen told the Wall Street Journal that the company prohibits using supply chain customer data for its own marketplace decisions, noting that hundreds of thousands of Amazon sellers already trust the company to fulfill orders placed on rival platforms.
The article notes that in his annual shareholder letter, Amazon's CEO "said the company is also exploring selling its custom AI chips and robotics to outside customers."
Read more of this story at Slashdot.
Categories: Linux fréttir
GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement
"General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so."
In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result."
Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..."
The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales.
However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates."
As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data.
"This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement. The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."
"Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."
Read more of this story at Slashdot.
Categories: Linux fréttir
Amazon Relents, Lets its Programmers Use OpenAI's Codex and Anthropic's Claude
An anonymous reader shared this report from Futurism:
In November, Amazon leaders sent an internal memo to employees, pushing them to use its in-house code generating tool, Kiro, over third-party alternatives from competitors. "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools," the memo read, as quoted by Reuters at the time. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them."
It was an unusual development, considering the tens of billions of dollars the e-commerce giant has invested in its competitors in the space, including Anthropic and OpenAI... Half a year later, Amazon is singing a dramatically different tune. As Business Insider reports, Amazon is officially throwing in the towel, succumbing to growing calls among employees for access to OpenAI's Codex and Anthropic's Claude... Given the unfortunate optics of opening the floodgates for Codex and Claude Code, an Amazon spokesperson told the publication in a statement that teams are still "primarily using" Kiro, claiming that 83 percent of engineers at the company are leaning on it.
Read more of this story at Slashdot.
Categories: Linux fréttir
Rocket Lab Reports Growing Demand for Commercial Space Products. Stock Surges 34%
For just the first three months of 2026, Rocket Lab's launch business posted $63.7 million in revenue, reports CNBC — plus another $136.7 million from its space systems business. Besides beating Wall Street's expectations, Rocket Lab also announced that its backlog has more than doubled from a year ago to $2.2 billion, and that it's buying space robotics company Motiv Space Systems.
Friday its stock price shot up 34% in one day...
Rocket Lab's stock has more than quadrupled over the past year, benefiting from skyrocketing demand for businesses tied to the space economy ahead of SpaceX's hotly anticipated IPO later this year. Demand for space systems and satellites is also escalating as President Donald Trump pursues his ambitious Golden Dome missile defense project and NASA's crewed Artemis missions rev up.
Rocket Lab said Thursday that it signed its largest contract ever with a confidential customer for its Neutron and Electron rockets through 2029, weeks after landing a $190 million deal for 20 hypersonic test flights... "The demand signal is clear," CEO Peter Beck said on an earnings call with analysts, calling the pace of new product releases from the company this year "relentless".... Rocket Lab's good news lifted other space companies. Firefly Aerospace and Intuitive Machines both jumped more than 20%, while Redwire gained 19%. Voyager Technologies rose 14%.
"The company anticipates revenue between $225 million and $240 million during the second quarter."
Read more of this story at Slashdot.
Categories: Linux fréttir
Unemployment Ticked Up in America's IT Sector
IT sector unemployment "increased to 3.8% in April from 3.6% in March," reports the Wall Street Journal.
But they add that the increase reflects "an ongoing uncertainty in tech as AI continues to play havoc with hiring. That's according to analysis from consulting firm Janco Associates, which bases its findings on data from the U.S. Labor Department."
On Friday, the department said the economy added 115,000 jobs, buoyed by gains in industries including retail, transportation and warehousing and healthcare. The unemployment rate was unchanged at 4.3%. But the information sector lost 13,000 jobs in April.
While it's still too early to say exactly how AI is affecting employment overall, some businesses, especially in the tech industry, have said it's part of the reason they're cutting staff. In April, Meta Platforms said it would lay off 10% of its staff, or roughly 8,000 people, as it seeks to streamline operations and pay for its own massive investments in AI. Nike will reduce its workforce by roughly 1,400 workers, or about 2%, mostly in its tech department, as it simplifies global operations. And Snap is planning to eliminate 16% of its workforce, or about 1,000 positions, as it aims to boost efficiency. In other areas of IT, which includes telecommunications and data-processing, employment is now down 11%, or 342,000 jobs, from its most recent peak in November 2022.
But there's not just AI to blame. Inflation and economic uncertainty linked to the Iran conflict are giving some chief executives and tech leaders reason to pull back or pause their IT hiring, said Janco Chief Executive Victor Janulaitis.
The article even notes that postings for software developer jobs "are up 15% year-over-year on job-search platform Indeed, according to Hannah Calhoon, its vice president of AI". But employers do seem to be looking for experienced developers, which could pose a problem for recent college graduates.
Read more of this story at Slashdot.
Categories: Linux fréttir
