TheRegister

Articles from www.theregister.com

Google says criminals used AI-built zero-day in planned mass hack spree

Mon, 2026-05-11 13:38
Google says crooks already have AI cooking up zero-days, and claims one nearly escaped into the wild before the company stopped it.

In a report shared with The Register ahead of publication on Monday, Google's Threat Intelligence Group (GTIG) said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly discovered by criminals working together on a large-scale intrusion operation. GTIG said that the attackers appear to have used an AI model to both identify the flaw and help turn it into a usable exploit.

Google worked with the unnamed vendor to quietly patch the issue, which it believes may have disrupted the operation before it gained traction. The company insists that neither Gemini nor Anthropic's Mythos was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as "educational docstrings," a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data.

Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding. "While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies," the report said.

John Hultquist, chief analyst at Google Threat Intelligence Group, said anyone still treating AI-assisted vulnerability discovery as a future problem is already behind. "There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun. For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology but the criminal threat shouldn't be underestimated, especially given their history of broad, aggressive attacks."

Google's report suggests that the zero-day case is part of something much bigger. GTIG said North Korean crew APT45 had been using AI to churn through thousands of exploit checks and bulk out its toolkit, while Chinese state-linked operators were experimenting with AI systems for vulnerability hunting and automated probing of targets. Google also described malware families padded out with AI-generated junk code designed to confuse analysts, Android backdoors using Gemini APIs to autonomously navigate infected devices, and Russian influence operations stitching fabricated AI-generated audio into legitimate news footage.

The awkward bit for everyone else is that this still appears to be the clumsy early phase. Google said mistakes in the exploit's implementation probably interfered with the criminals' plans this time around, but that may not stay true for long. ®
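GTIG's description of the flaw, a trust exception hard-coded into the login path, maps to a familiar class of bug. Below is a purely hypothetical sketch of what such a bypass can look like; the affected platform's code has not been published, and every name in the snippet is invented for illustration.

```python
# Hypothetical illustration only: the vulnerable code has not been released,
# and all names below are made up. It shows the general class of flaw GTIG
# describes: a hard-coded trust exception in the login flow that lets a
# request skip the second-factor check entirely.

def verify_password(user: dict, password: str) -> bool:
    return password == user["password"]          # stand-in for a real check

def verify_totp(user: dict, otp: str) -> bool:
    return otp == user["totp"]                   # stand-in for a real check

def login(user: dict, password: str, otp: str, source_ip: str) -> bool:
    if not verify_password(user, password):
        return False
    # The flaw: a developer convenience left in production. Any request
    # matching the hard-coded exception never reaches the 2FA check.
    if source_ip.startswith("10.0.") or user.get("legacy_integration"):
        return True                              # second factor skipped
    return verify_totp(user, otp)

admin = {"password": "hunter2", "totp": "492817", "legacy_integration": True}
print(login(admin, "hunter2", "wrong-code", "203.0.113.7"))  # True: 2FA bypassed
```

Nothing here crashes or trips a fuzzer: the logic is valid, it is simply wrong, which is exactly the kind of high-level mistake GTIG says frontier models are getting good at spotting.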

SoftBank bets on battery building to back bit barns

Mon, 2026-05-11 13:37
SoftBank is getting into the datacenter battery business and plans to start manufacturing batteries on the scale of gigawatt-hours per year of capacity to support the power needs of AI infrastructure, including its own.

The Japan-based tech investment biz says it aims to deploy the battery systems it is developing at its own large-scale AI server farms initially, but plans to make them more widely available in future. It hopes to begin mass production in financial year 2027, and expects the operation to generate revenue of ¥100 billion (over $600 million) per year by 2030.

SoftBank is working with two South Korean firms that have a track record in advanced battery-related technologies. One is Cosmos Lab, developer of zinc-halogen batteries that use pure water as an electrolyte, making them non-flammable, and the other is DeltaX, which designs and manufactures battery-based energy storage systems (BESS).

Reg readers may recall that SoftBank last year bought the rights to a former Sharp LCD panel factory in Sakai City, Osaka prefecture in Japan, and said it planned to convert it into a datacenter to operate AI agents developed jointly with ChatGPT creator OpenAI. The site will now become an industrial cluster, home to its battery manufacturing facility as well. SoftBank referred to it as a core hub to establish its AX Factory (a center for datacenter operations and AI infrastructure hardware manufacturing), and GX Factory (serving as a manufacturing facility for next-gen batteries, solar panels, and related products).

One detail missing is how much cash the investment biz is pouring into this venture. We asked how much the project is costing to get off the ground, but a SoftBank spokesperson told us it was not able to comment.

SoftBank plans to start by deploying the battery systems produced at its GX Factory in its own server halls, but will then provide them for grid applications in Japan, plus factories and other industrial uses. It hopes to take the technology into global markets over the medium term. In presentation slides seen by The Register, the firm says BESS for commercial and industrial use will have a capacity of 140 kWh to 560 kWh, while those for large-scale or grid-scale use will come in at 2,240 kWh to 5,380 kWh.

According to SoftBank, DeltaX has developed BESS with a storage capacity exceeding 5 MWh in a standard commercial container format (a 20-foot shipping container). The way DeltaX packs together and connects the battery cells in its BESS maximizes their performance, SoftBank claims, and by applying these technologies to next-generation battery cells (presumably referring to those of Cosmos Lab), further improvements in energy storage can be achieved.

Those battery cells, which SoftBank calls Innovative batteries, use a halogen-based material for the cathode and zinc for the anode, which it says offers charge-discharge characteristics with minimal energy loss and energy efficiency comparable to existing lithium-ion batteries. As they use pure water as the electrolyte, SoftBank claims these batteries are inherently safer and won't catch fire, unlike lithium-ion batteries, which have a well-documented tendency to do exactly that.

SoftBank has its finger in a number of pies when it comes to AI projects. The firm was aiming to pump $22.5 billion into LLM developer OpenAI before the end of 2025, and more recently announced plans for a massive 10 GW datacenter campus on US Department of Energy (DoE) land in Ohio.
The company is also majority shareholder of chip designer Arm, which recently revealed its first Arm-branded datacenter processor targeting AI, and owns Ampere Computing, which makes Arm-based server chips. ®

Water company's leaky security earns near-£1M fine

Mon, 2026-05-11 12:52
The UK's data protection watchdog has fined South Staffordshire Water's parent company nearly £1 million over security failings exposed by the Cl0p ransomware attack in 2022.

Issuing the fine of £963,900 ($1.3 million), the Information Commissioner's Office (ICO) said the attack exposed "significant failures in the company's approach to data security." The attack, claimed by Cl0p, was detected in July 2022 after engineers responded to performance issues, but a thorough postmortem revealed the initial intrusion occurred almost two years earlier, in September 2020.

Among the key failures that led to the attack, and the nearly two-year delay in detecting it, were:

- Limited controls, which allowed the attacker to escalate their privileges to admin after gaining an initial foothold on the network
- Inadequate monitoring and logging. The ICO noted that only 5 percent of South Staffordshire's IT environment was being monitored
- Running unsupported software, including Windows Server 2003
- Poor vulnerability management. Investigations showed critical systems were unpatched against known vulnerabilities, and the company failed to regularly run internal or external security scans

The ICO said 633,887 people were affected by the attack and the resulting leak of company files. For customers, this included personally identifiable information, usernames and passwords used to access its online services, and bank account numbers and sort codes. For a limited number of customers on the utility company's Priority Services Register, the stolen information could have led to their disabilities being inferred. Cl0p also pilfered HR information, including employees' National Insurance numbers. The trove of company data was later leaked online in a file exceeding 4 TB.

At the time of the attack, South Staffordshire handled the data of some 1.85 million individuals. Most of these were either current or former customers, but several thousand staffers' details were also retained.

"Customers do not have the choice over which water company serves them – they are required to share their personal information and place their trust in that provider," said Ian Hulme, interim executive director for regulatory supervision at the ICO. "It is therefore essential that water companies honor that trust by taking their data protection responsibilities seriously."

"The steps that South Staffordshire failed to take are established, widely understood and effective controls to protect computer networks. The ICO expects all organizations – and particularly those handling large volumes of personal information as part of critical national infrastructure – to have these in place."

"Waiting for performance issues or a ransom note to discover a breach is not acceptable. Proactive security is a legal requirement, not an optional extra."

The ICO announced its intent to fine South Staffordshire in December 2025. The regulator said after reviewing the company's representations, which included agreement with its findings and an early admission of wrongdoing, it reduced the fine by 40 percent.

"We accept the Information Commissioner's Office's decision relating to the cyberattack our Group experienced in 2022, and are sorry for the worry and concern it caused for customers and employees," said Charley Maher, group CEO at South Staffordshire Plc, in a statement provided to The Register. "We took immediate action to contain the incident, support those impacted, and reduce the risk of recurrence."
"We have invested significantly to further strengthen our cybersecurity resilience, governance, and monitoring, and we continue to enhance our capabilities as the threat landscape evolves. Protecting customer and employee information is a responsibility we take extremely seriously, and we remain focused on learning from this incident and maintaining strong safeguards across the Group." ®

Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged

Mon, 2026-05-11 12:11
Checkmarx's software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend.

It updated customers on Saturday, May 9, after discovering a version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, was made available via the Jenkins Marketplace. "We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace," it said in a statement. "We are in the process of publishing a new version of this plug-in."

Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they're running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. Installed by several hundred controllers, the plugin remains available at the time of writing, and appears as the most recently available version, although pull requests actioned on Monday morning suggest this will soon be pulled down.

"What makes this particularly dangerous for Jenkins users is the trust model at play," said SOCRadar in its coverage. "The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines.

"A backdoored version doesn't just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see."

Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind the earlier supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company's GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware. These packages no longer appear on Checkmarx's GitHub, but TeamPCP made multiple changes to the AST plugins page, renaming it to "Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now," and altering the description to claim Checkmarx failed to rotate its secrets.

The latest infiltration of Checkmarx's internals marks the third time TeamPCP has compromised the company's packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx's AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware.

SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx's secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®
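For admins who want to confirm what a controller is actually running, Jenkins' standard pluginManager JSON API lists installed plugins and versions. The sketch below is illustrative only: the controller URL is a placeholder, the plugin short name "checkmarx-ast-scanner" is an assumption to verify against your own instance, and authentication (which a real controller will require) is omitted.

```python
# Query Jenkins' pluginManager JSON API and compare the installed version of
# the (assumed) Checkmarx AST plugin against the known-good release.
import json
import urllib.request

JENKINS_URL = "https://jenkins.example.com"          # placeholder controller URL
GOOD_VERSION = "2.0.13-829.vc72453fa_1c16"           # release of December 17, 2025

with urllib.request.urlopen(f"{JENKINS_URL}/pluginManager/api/json?depth=1") as resp:
    plugins = json.load(resp)["plugins"]

for plugin in plugins:
    if plugin["shortName"] == "checkmarx-ast-scanner":   # assumed plugin ID
        verdict = "OK" if plugin["version"] == GOOD_VERSION else "DO NOT TRUST"
        print(plugin["shortName"], plugin["version"], verdict)
```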

NASA's bid to save Swift from fiery death passes another hurdle

Mon, 2026-05-11 11:52
A rescue mission for NASA's Neil Gehrels Swift Observatory has taken another step forward following the completion of environmental tests at the agency's Goddard Space Flight Center. The purpose of the tests was to assess how the LINK robotic servicing spacecraft, supplied by Katalyst Space Technologies, would withstand the forces of launch and the extremes of the orbital environment.

The mission is ambitious and fast-paced. It was only in August 2025 that NASA asked US industry for ideas on rescuing the observatory, whose orbit is decaying faster than expected. Katalyst was awarded the contract and has been working against the clock to launch its servicing spacecraft before Swift reaches the point of no return. In February 2026, NASA ended most science operations aboard Swift to keep the spacecraft in orbit long enough for the rescue mission. At the time, June 2026 was Katalyst's expected launch date and, thanks to the successful completion of testing, the mission remains on track.

The next step is for Northrop Grumman to integrate LINK into its Pegasus rocket in early June, with launch planned from the last airworthy L-1011 TriStar (dubbed Stargazer) later that month. The LINK spacecraft has undergone vibration testing to simulate a Pegasus launch and thermal-vacuum testing in Goddard's Space Environment Simulator, where it experienced space-like hot and cold temperature extremes. The team also test-fired the spacecraft's three xenon-powered ion thrusters and deployed one of its robotic arms.

Kieran Wilson, LINK's principal investigator at Katalyst, said: "We're in an unusual situation where the schedule dictates how much risk we're willing to accept, rather than the other way around.

"The clock is ticking on Swift's descent, so we have to find a balance between testing and problem solving that gives the mission the best chance of success."

After paying tribute to the speed at which Katalyst was moving, Swift mission director John Van Eepoel said: "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude."

In this instance, the Swift observatory has nothing to lose and everything to gain from the reboost mission. The spacecraft is more than 20 years into a two-year task to study gamma-ray bursts. If it weren't for its decaying orbit (and the Trump administration's effort to terminate it - the mission was on the chopping block in the FY2026 budget proposal), it could continue observations for years to come. ®

Linux kernel maintainers pitch emergency killswitch after CopyFail and Dirty Frag chaos

Mon, 2026-05-11 11:16
Linux kernel maintainers are considering giving admins a giant red emergency button to smash the next time another nasty vulnerability drops before patches are ready.

The proposed feature, named "Killswitch," would let admins temporarily disable specific vulnerable kernel functions at runtime instead of sitting around waiting for fixes. The patch was submitted by Linux stable kernel co-maintainer and Nvidia engineer Sasha Levin after a bruising couple of weeks for Linux security.

The proposal basically gives admins a way to pull the plug on vulnerable kernel functionality. If exploit code starts spreading before patches arrive, the targeted function can be disabled so calls to it immediately fail instead of reaching the vulnerable code. "When a (security) issue goes public, fleets stay exposed until a patched kernel is built, distributed, and rebooted into," Levin wrote. "For many such issues the simplest mitigation is to stop calling the buggy function. Killswitch provides that."

The past couple of weeks have not exactly been great advertising for the traditional "wait for patches" approach. First we saw the disclosure of CopyFail, a Linux local privilege escalation bug that quickly moved from disclosure to active exploitation. Days later, Dirty Frag emerged: another Linux privilege escalation flaw with public exploit code and no official fixes, after coordinated disclosure efforts fell apart before patches were ready. Killswitch aims to fill that gap.

Killswitch would work through the kernel's security interface and is mainly intended for subsystems that systems can survive without for a while. In practical terms, Levin's argument is that temporarily losing some networking or crypto functionality is preferable to leaving known vulnerable code exposed on production systems. However, the feature would not fix vulnerable code or replace it with safe code. It just slams the door shut on the dangerous bit until administrators can properly update their kernels.

Naturally, handing sysadmins the ability to selectively shoot pieces of the kernel in the head has already sparked debate among developers over stability, potential for abuse, and whether people can be trusted not to accidentally saw off important limbs in production. Still, after CopyFail and Dirty Frag, the kernel community increasingly seems to be arriving at the conclusion that running broken functionality may now be preferable to running weaponized functionality. ®
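To make the idea concrete, here is an entirely speculative sketch of how an admin-facing toggle could be driven. Killswitch is only a proposal, nothing has been merged, and the path, file format, and function name below are invented for illustration under the assumption of a securityfs-style knob per function.

```python
# Speculative sketch only: assumes a hypothetical securityfs-style interface
# at /sys/kernel/security/killswitch/<function>, where writing "1" makes
# calls to that function fail immediately and "0" restores it.
from pathlib import Path

KILLSWITCH_DIR = Path("/sys/kernel/security/killswitch")   # assumed path

def set_killswitch(function: str, disabled: bool) -> None:
    """Flip the hypothetical knob for one kernel function."""
    knob = KILLSWITCH_DIR / function
    if not knob.exists():
        raise FileNotFoundError(
            f"no killswitch knob for {function!r} (interface is hypothetical)")
    knob.write_text("1\n" if disabled else "0\n")

if __name__ == "__main__":
    # Block a made-up vulnerable entry point until a patched kernel ships,
    # then flip it back after rebooting into the fixed build.
    set_killswitch("vuln_subsystem_entry", disabled=True)
```

Whatever interface eventually lands, if any, may look nothing like this; the point is simply that disabling a function becomes a one-line runtime action rather than a kernel rebuild, rollout, and reboot.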

Classic Outlook's Quick Steps trip over Microsoft bug

Mon, 2026-05-11 10:28
If you're using Quick Steps in Microsoft Outlook and wondering why they're grayed out, a bug introduced in version 2512 is the culprit.

Classic Outlook is approaching the twilight years of its prodigiously long life, but users can still fall victim to productivity-killing bugs – in this case, a problem with Quick Steps. Quick Steps automates common or repetitive tasks in Outlook. Always have to move a bunch of messages to a specific folder? Quick Steps is your friend. Pin an email and mark it as unread? Again, the actions can be lined up in Quick Steps and executed with a single click or a keyboard shortcut. Until Microsoft breaks it.

In a support article, Microsoft has confirmed that in some situations, Quick Steps in classic Outlook can appear grayed out. The workaround (if rolling back or switching clients isn't an option) is to use a keyboard shortcut. "The shortcut will work even if the Quick Step is grayed out in the user interface," Microsoft wrote.

The problem is that if a Quick Step contains actions that "can't be fulfilled," it's grayed out. Microsoft's own example states: "A Quick Step that moves a message to a folder and clears categories will be grayed out in messages where there are no categories applied." "This is known to happen with Quick Steps with Flags and Categories actions such as 'Clear flags on message' or 'Clear categories'."

Classic Outlook has suffered several glitches of late. Microsoft admitted in April that it could occasionally chow down on system resources for no obvious reason. Then there was its tendency to explode when opening too many emails. Microsoft has been clear that Classic Outlook's days are numbered. Outlook 2024 is due to drop out of mainstream support in 2029. However, there remains much that Classic Outlook does which New Outlook doesn't, such as COM support. And, when Microsoft hasn't broken them, Quick Steps. ®

Europe wants out from under US tech – but first it has to find the exits

Mon, 2026-05-11 10:00
In late December, US Secretary of State Marco Rubio sanctioned former European Commission tech chief Thierry Breton for his role in leading "organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose."

The architect of the EU's Digital Services Act (DSA) – a pet hate of the Trump administration – has yet to be deterred. Last month, he joined a chorus of calls for Europe to end its reliance on dominant US tech companies. "The time for an apologetic Europe is over," the former Atos CEO said in a rallying cry that points out we now live in a world "where digital sovereignty has become one of the central arenas of power politics."

But what to do about it? US companies hold overwhelming positions in markets including cloud infrastructure and personal productivity tools, to say the least. Breton says Europe has a "constellation of [tech] players that, together, form a considerable base," but offers little explanation of how it might extract itself from the incumbent providers and what the new world might look like.

One of his compatriots has, though. Nicolas Roux, systems engineer at French aerospace research lab ONERA, has put together a comprehensive analysis in an attempt to understand which systems might fail first under the kind of pressure the US has already exercised on European institutions and individuals. It also looks at how long they would take to recover, how Europe can reduce its exposure, and which levers – organizational, sectoral, or political – it should pull to ensure better digital sovereignty. The 137-page report is designed for Europe's decision-makers on tech and policy. The details are too numerous to summarize, but it offers a glimpse of some worst-case scenarios as well as cause for optimism.

As the report points out, a sense of urgency has gripped European institutions following US sanctions on International Criminal Court (ICC) prosecutor Karim Khan, which led to his Microsoft services being suspended. Microsoft denied responsibility, saying it was the ICC's decision. The Dutch press later reported that the decision was made under duress after Microsoft pointed out that its obligations under the sanctions meant it would have to cut off service to the entire organization unless the ICC removed Khan's access. In March, Henna Virkkunen, Executive Vice-President of the European Commission with responsibility for technological sovereignty, said that Europe's dependence on American technology had become a security concern visible beyond specialist circles.

There are so many layers of technology in which the US dominates, and so many interdependencies, that any effective move toward digital sovereignty should be based on an understanding of which are the most vulnerable and which are hardest to replace. Roux zeros in on Identity and Access Management (IAM). The US dominates enterprise deployments with few exceptions. The report names "Microsoft, Ping Identity, and IBM as the market's leading operators, with Okta, Oracle Identity Governance, and CyberArk accounting for the majority of remaining enterprise contracts," adding: "No European vendor appears in any tier of the competitive landscape. For European public administrations, this means that the layer of infrastructure responsible for authenticating every user and authorising every access decision is, in most cases, operated by a vendor incorporated in the United States and subject to American law."
Roux points out that Microsoft 365, the service for productivity apps on which nearly all organizations rely, runs the Redmond vendor's Entra ID as its identity provider by default. The report says: "The strategic sensitivity of this layer is compounded by a property it shares with no other: IAM dependency is invisible in normal operations and total in failure. An organization discovers its IAM dependency not when costs increase or performance degrades, but when access is denied. It represents an actionable 'kill switch.'"

There is a European alternative in Keycloak, but even if a European organization chose to self-host the service on a European cloud, it would not be free from dependencies on US companies, which could be compelled to turn off services under US legislation, the report argues. "What does not hold is inter-organizational authentication. As long as partner organisations (ministries, contractors, other public bodies) operate Entra ID as their identity provider, external authentication chains pass through Microsoft infrastructure by default. Under pressure, the first thing that breaks is the ability to collaborate securely with anyone outside the organisation's own perimeter."

There is a gap in the market for a European IAM provider as a fully managed service with the SLA guarantees and support model that public sector organizations can buy through existing procurement vehicles. But to counter the problem with inter-organizational authentication, Europe needs not a product, but a standard – "a shared European public sector identity federation framework, mandatory for public administrations, built on open protocols, and interoperable by design," Roux says.

The market for cloud infrastructure and services is overwhelmingly dominated by US providers, which often interlock infrastructure and platform services with other technologies. "The lock-in is architectural: organizations have built dependencies on platform-specific services (Lambda functions, BigQuery pipelines, Azure Cognitive integrations) that have no direct drop-in replacement. Infrastructure can be migrated but application architecture cannot be switched without rethinking," the report says.

Nonetheless, there are a bunch of European alternatives on the market. France's OVHcloud and Scaleway are among them, as are German providers Hetzner, IONOS, and STACKIT, owned by retail group Schwarz. It may seem impossible for European providers to replace AWS, with its mammoth scale and buying power, but for Roux, replacing AWS is the answer to the wrong question.

"No European provider will replicate the full AWS service catalogue. That catalogue was built over twenty years by a company with access to essentially unlimited capital, operating in a continental domestic market with no regulatory friction. The conditions that produced it do not exist in Europe and will not be manufactured by policy. Asking for a European AWS is asking for a different history. The right question is different: for each layer, what does a given organization actually need, and is a credible European alternative available for that specific need?"

The report points out that the most serious gaps are in three areas of cloud services. The first is advanced workloads, such as managed AI/ML pipelines and high-concurrency serverless functions. But the constraint only affects a small minority of public sector organizations and is "an irrelevance for the majority." Secondly, there is scale.
OVHcloud's total 2024 revenue is approximately 0.9 percent of the figure AWS publishes. But a coordinated policy of investment at both EU and state level can help close that gap. Lastly, Europe struggles to coordinate services between providers that "operate excellent but largely siloed platforms." Roux says this problem might be solvable "through open standards and interoperability frameworks, but it requires deliberate architectural choices that organizations accustomed to single-vendor convenience are not always prepared to make."

Although starting from a low base, the European cloud market is set for rapid growth as investment mirrors geopolitical concerns. European spending on sovereign cloud infrastructure services is forecast to more than triple from 2025 to 2027, from $6.9 billion to $23.1 billion, Gartner reported in February, well ahead of any established region. Speaking to The Register, Rene Buest, Gartner senior director analyst, said European businesses are considering local and regional sovereign cloud providers for new cloud workloads, while they work to understand the complexities of migrating established workloads.

This is just a glimpse of the problems – and practical measures – the report outlines. Some of the solutions lie at a policy level by driving demand through public procurement and by creating standards. Breton also sees Europe gaining the upper hand through policy, the single market, and by imposing EU rules on data, competition, algorithmic transparency, and taxation. But continuing to create rules that allow for digital sovereignty can be an uphill struggle in the face of US industry lobbying. Roux quotes the NGOs Corporate Europe Observatory and LobbyControl, which studied the EU Transparency Register. They concluded that the tech industry spent a record €151 million on EU lobbying, a figure that has increased by a third in two years. "Big Tech" employs more full-time lobbyists in Brussels than there are Members of the European Parliament.

The European Commission is expected to address parts of the issue through a technological sovereignty package set to arrive at the end of May. It is likely to draw on a €234 billion European competitiveness fund, including a €20 billion package for AI infrastructure, supply chain cybersecurity liability provisions for digital infrastructure, and a strong orientation toward sovereign cloud and open source principles. The hope is that through policy and investment, Europe can get CIOs and tech buyers to overcome the barriers to collective action – that is, "each individual sourcing decision is locally rational, while the aggregate outcome (a continued and deepening operational and economic dependency, in the terms defined above) is collectively irrational."

Europe may have been slow to address weaknesses in its digital sovereignty, but it has already proved it has the staying power to take on US might. It took 50 years for a consortium of European aerospace businesses from the UK, France, Germany, and Spain to take on dominant aircraft manufacturer Boeing. In 2023, the number of Airbus aircraft in service surpassed Boeing for the first time. Catherine Jestin, executive vice president of digital at Airbus, told The Register last year that the same could be possible in tech. "It's a long game. And if you look at the way China is approaching it, it takes time. It takes political will and the alignment of the industry," she said. Europe doesn't need to dominate the tech market to ensure its digital sovereignty.
It only needs viable alternatives to US providers at each layer of the stack, rather than direct replacements for the biggest suppliers. It will take time, but it will never get there unless it makes a start. As Roux shows, there are those willing to provide a map. ®

The latest innovation in UK public transport: Schrödinger's trains

Mon, 2026-05-11 09:15
BORK!BORK!BORK! Guessing games are all the rage, and commuters trying to get home from London Victoria station found themselves flipping a virtual coin to guess the location of their train after Inspector Bork paid a visit to the station's platform board.

London Victoria Station is a major transport hub for England's capital city. Trains from the station serve much of the southern part of the country and farther afield. Built around 1860, the station has had various platform display systems over the years. For a long time, the board was of the Solari split-flap type, replete with a delightful clickety-clack sound as destination information was updated. Today's board is a huge digital display which, while undoubtedly more flexible and capable of displaying far more information than the split-flap affair of old, is also susceptible to a visit from the bork fairy. Where the split-flap board might occasionally jam, the digital board could suddenly go inexplicably dark.

As happened on May 7, 2026, when Victoria train station was at its busiest. Where platforms, stations, and times were usually listed, there was instead a network error followed by a clock. As such, while the location of trains might have been a mystery for commuters, at least they knew the time.

Some travelers, likely tourists, looked confused. Others, probably regular commuters, continued their muscle-memory-propelled trudge toward the platforms. And in the back office? We suspect some frantic clicking of mouse buttons and hammering of keys while a harassed operator tried to work out what had happened to the data.

For many passengers, the borked board was symptomatic of how their day had gone. Problems with the trains in the region had made national news, so an apparent admission that nothing was going anywhere was likely the icing on a particularly unpleasant cake. Still, at least the station is not short of places where adult beverages can be bought and consumed. Sometimes that's the best way to deal with a journey on the UK's public transport system, bork or no bork. ®

Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks

Mon, 2026-05-11 08:30
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet.

Thus it was a very bad day indeed when Taiwan's entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it's good to see that the grand tradition of young hackers causing havoc from their bedrooms remains in good repair. On the other, WTRF?

The information released by the Taiwanese authorities is scant on details, but enough to be pretty sure what actually happened. It's bad news not just for Taiwan but for more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s — and work as well in 2026 as you might expect. Oops.

There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. But a lot of the installed base globally is old, lacks over-the-air security updates, and in any case spending money on new radios is normally at the bottom of the list for state and public service organizations. Things have to get really bad first. Perhaps they just have.

(North America is the only region where TETRA is uncommon, as it isn't approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now on with the show.)

The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn't have any TETRA hardware, but a laptop connected to a radio and an 'SDR filter'. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut.

SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware could also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog to digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital to analog converter to play the results. What you get is triply terrific.
Digital processing is perfect, analog processing adds noise and distortion. Nothing is fixed, everything can be re-engineered with new code. And it can be hog-whimperingly cheap.

HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages. There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor and love letters that look suspiciously like arrest warrants. Think of it as speed dating, but with more guns and fewer no-thank-yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You'll need it to unlock the cell door.

It is the data eavesdropping and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it had come from the original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn't matter. It's called a replay attack, and it has mostly been used in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys.

Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week.

Of course, they can't be banned. SDRs can't be banned as a class, especially open source ones made out of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn't matter if you're dismayed or delighted that things like HackRF exist, that genie is out of the bottle.

What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize and never repeat keys. That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing this will be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee's knife is to the mugger's.

Critical infrastructure resilience. Just three little words, but if you say them you better mean it. And it won't be cheap. ®
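For readers who want to see why replay is considered a solved problem, here is a minimal sketch of nonce-based replay protection. It is illustrative Python, not the TETRA protocol or any vendor's actual implementation, and the key handling is deliberately simplified.

```python
# Minimal replay protection: every message carries a fresh random nonce and an
# HMAC over (nonce + command). The receiver rejects bad tags and reused nonces.
import hashlib
import hmac
import os

SECRET = os.urandom(32)          # shared key; in reality provisioned out of band
seen_nonces = set()              # receiver-side record of nonces already accepted

def send(command: bytes) -> tuple[bytes, bytes, bytes]:
    nonce = os.urandom(16)       # big keyspace, randomized, never reused
    tag = hmac.new(SECRET, nonce + command, hashlib.sha256).digest()
    return command, nonce, tag

def receive(command: bytes, nonce: bytes, tag: bytes) -> bool:
    expected = hmac.new(SECRET, nonce + command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False             # forged or corrupted message
    if nonce in seen_nonces:
        return False             # replayed capture - reject it
    seen_nonces.add(nonce)
    return True

message = send(b"GENERAL_ALARM")
print(receive(*message))         # True: first delivery accepted
print(receive(*message))         # False: identical retransmission rejected
```

The second, byte-identical delivery is refused because its nonce has already been seen, so a captured transmission cannot simply be played back into the network.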

Who, Me? Lab worker built a fake PC to nuke his lunch

Mon, 2026-05-11 07:30
WHO, ME? Welcome to a fresh, tasty instalment of Who, Me? It's The Register's reader-contributed column in which readers confess about things they did at work that probably deserve to remain a secret.

This week, meet a reader we'll Regomize as "Ray" who told us he once worked in a research lab repairing nucleonic instruments. We understand they're gadgets that use very short half-life isotopes that emit just enough radiation that it's possible to measure the backscatter. According to the World Nuclear Association this is helpful to measure the level of coal in a hopper, or the thickness of paper!

Like many workplaces, the lab Ray worked in had a microwave oven staff could use to warm their lunches, and a coffee machine too. The difference in this lab was that the appliances lived next to a sink used to wash the nucleonic kit. Ray's manager decided that posed a risk to workers' health – which it didn't – so insisted the microwave and coffee machine go elsewhere.

Ray's solution was to screenshot his PC's desktop, print it onto A3 paper, and laminate it. "The screen looked very realistic without requiring a backlight," he said. So Ray used the printout to disguise the microwave as a PC, moved it into an unused office, and put a keyboard and mouse in front of it. He also found the coffee machine a new home where the manager wouldn't go looking. "They were both still in use when I retired three years later," he told Who, Me?

Have you found a way to defy the boss and got away with it? If so, click here to send us an email. We'd love the chance to expose readers to your story! ®

Sovereign cloud is only possible if you’re Chinese or American: Gartner

Mon, 2026-05-11 05:31
It's not possible to operate a completely sovereign cloud outside of China or the USA, according to Douglas Toombs, a VP analyst at Gartner. Speaking at the analyst firm's IT Infrastructure, Operations & Cloud Strategies Conference in Sydney today, Toombs said only the US and China make all the tech needed for a sovereign cloud. Buyers elsewhere can't avoid relationships with foreign providers.

Toombs said that while US-based cloud vendors have created products they say can meet the needs of organizations that need a cloud that doesn't have legal entanglements outside their chosen jurisdiction, the fact they're ultimately owned by American corporations means it's not possible to be certain a cloud provider can promise complete sovereignty. Even on-prem clouds like AWS Outposts, Azure Local, or Oracle's Dedicated Cloud Regions "need to phone home," he said.

The analyst doesn't think attempts to create sovereign clouds will succeed. He mentioned past French attempts to create sovereign clouds named "Andromeda," "Numergy," and "Gaia-X," which he says went nowhere - but did produce some nice white papers. He also cited The Rule of Three and Four, a maxim developed by Boston Consulting Group that asserts "A stable competitive market never has more than three significant competitors, the largest of which has no more than four times the market share of the smallest," and argued that it predicts the cloud market has settled around AWS, Google, and Microsoft.

Toombs allowed that some smaller clouds could thrive and will make it feasible to create sovereign SaaS providers and products. But he thinks that even aggressive moves to go on-prem won't free organizations from dependency on US-owned clouds, an assertion he backed with the example of a Dutch healthcare provider that tried to build its own infrastructure but then experienced an outage when a supplier's services went down along with a major cloud provider.

If sovereign clouds fail to develop, it may be problematic because some European organizations are worried US-based cloud operators might leave the continent, forcing them into hasty and risky migration projects, according to Adrian Wong, a Gartner director analyst who also spoke in Sydney today. Wong said "heightened geopolitical tensions" are causing customers of major clouds to rethink their strategies, a decision he welcomes because he sees very few organizations bother to develop a cloud exit strategy. "Exit plans are overlooked," he said, and users are "very much locked in" – especially when they use cloud-native services or platform-as-a-service. "Exiting within a timeframe of anything less than two years takes significant planning and investment," he warned. "Exit strategies and plans are largely swept under the rug." Wong says he is now seeing "the pendulum swing."

Not developing a cloud exit strategy is one of the ten big mistakes Wong sees users make. Also on his list are starting cloud adoption with mission-critical and complex applications like ERP, assuming the cloud is appropriate for all applications, and expecting to get all the benefits of the cloud with every application. He also said it's folly to assume that going multi-cloud will improve availability – unless users first tackle the more complex and expensive task of making applications portable. Wong said organizations that use multiple clouds should do so to access specific features of each, not to improve resilience. ®

China’s agentic AI policy wants to keep humans in the loop

Mon, 2026-05-11 00:49
China's Cyberspace Administration last week published draft regulations governing the behavior of AI agents and suggested humans should always retain the ability to review decisions taken by software.

The draft expresses Beijing's enthusiasm for AI agents with a call for efforts to develop datasets that accelerate development, along with security standards that make agents safe to use and ensure they behave ethically. There's also a call to develop mandatory standards for how agents will behave "in fields such as healthcare, transportation, media, and public safety." China also wants to participate in international fora that develop such standards.

The draft calls for developers of AI agents to "clarify the reasonable boundaries and required authority for various decision-making methods, such as decisions limited to the user, decisions requiring user authorization, and autonomous decisions by the intelligent agent." Those boundaries should "Ensure that users have the right to know and the final decision-making power regarding the autonomous decisions made by the intelligent agent, and that the intelligent agent's actions do not exceed the scope authorized by the user."

The draft identifies many tasks Beijing thinks agents might take on, including marking homework, analyzing medical images, evaluating employee performance and recommending promotions, helping disaster relief efforts, and even providing "intelligent management of the entire bidding and tendering process, ensuring standardization and efficiency throughout."

Samsung turns off its TV and appliance business in China

Korean giant Samsung last week decided to quit China's TV and appliance markets. "In response to the rapidly changing market environment, after careful consideration, Samsung Electronics has decided to cease sales of all home appliances, including televisions and monitors, in the Chinese mainland market," states an "adjustment notice" on the Samsung China website. Samsung will honor warranties, and continue to provide after-sales service.

The company hasn't said why it's quitting these markets in China. The Register expects the reasons have a lot to do with the rise and rise of Chinese consumer electronics companies, which can make a patriotic pitch in addition to pointing out the high quality of their products. Samsung's not the first to decide it's too tough to try trading televisions in China: Sony quit the country, too.

Thailand approves giant TikTok datacenter

The government of Thailand last week approved TikTok's plan to spend ฿842 billion ($25 billion) on new datacenters in the country. Thailand's Board of Investment said the project will see TikTok "install additional servers and expand data storage and processing infrastructure across Bangkok, Samut Prakan and Chachoengsao Province, supporting rising demand for digital services and strengthening Thailand's role in regional digital infrastructure." The Board also signed off on a 200 MW datacenter to be built by Skyline Data Center and Cloud Services Co, and a 134 MW facility from Bridge Data Centres.

Baidu to float its chip biz

Chinese web giant Baidu has filed paperwork to spin out its chip design business Kunlunxin.
Baidu flagged its plan to do this in January, when it said the aim was to "independently showcase Kunlunxin's value, attract investors focused on the AI chip sector, and leverage its standalone listing to enhance its market profile, broaden financing channels, and better align management accountability with performance." "This also supports the effort to unlock the value of Baidu's AI-powered businesses."

Kunlunxin's chips suit inferencing and training workloads, but their performance can't match Nvidia's latest chips – or even four-year-old kit like the H100. That hasn't stopped Baidu using the chips to power its own AI services, and major Chinese corporations also use the company's chips.

Japan and EU to improve tech interoperability

The EU-Japan Digital Partnership Council recently convened its annual meeting and last week revealed that talks included "deepened discussions on the joint development and interoperability of data spaces" and promised to keep talking in a new "Data Strategy Working Group" that will "improve the interoperability of data policy frameworks."

The meeting also discussed a successful pilot on interoperable digital identities which apparently "showed that cross-border use is technically possible, even where governance frameworks and technical architectures differ. Using prototypes of digital identity wallets, the project demonstrated how interoperability can be achieved in practice between different systems." As part of discussions, the EU and Japan agreed to begin working in new areas, including video games and audiovisual strategies.

Humanoid robot becomes Buddhist monk

Seoul's Jogye Temple last week allowed a robot named Gabi to take the vows required of a Buddhist monk. Temple leaders reportedly decided to initiate the robot because they feel humanoid machines will soon become a part of everyday life.

In February, the President of the Jogye Order, the Most Venerable Jinwoo, said "our lives have become ever more convenient thanks to cutting-edge science and AI. Yet the anxieties, anger, depression, and isolation — mental attachments and sufferings that science cannot resolve — are growing ever deeper."

"This does not mean that Buddhism withdraws from this vast technological civilization," he said. "Rather, we aim to fearlessly lead the AI era and redirect its achievements toward the path of attaining peace of mind and enlightenment."

"In the age of AI and quantum science, peace of mind will be cultivated through Buddhism." ®

Yes, local LLMs are ready to ease the compute strain

Sun, 2026-05-10 23:00
KETTLE We've been experimenting with LLMs for a while here at The Register, and if you ask our systems editor Tobias Mann and senior reporter Tom Claburn, locally installed coding assistants have actually become so good they could relieve some of the compute load that's pushing AI companies to raise their prices.

This week on The Kettle, host Brandon Vigliarolo is joined by Mann and Claburn to discuss their work with locally-hosted LLMs, why we're revisiting the topic at all, how to do local LLMs safely, and whether there's orbital relief coming for the compute crunch. You can listen to The Kettle here, as well as on Spotify and Apple Music, or read the full transcript of this episode below. ®

---

Brandon (00:01) Welcome back to another episode of The Register's Kettle podcast. I'm Reg reporter Brandon Vigliarolo, and with me this week are systems editor Tobias Mann and senior reporter Tom Claburn to talk about some experiments they've been doing with AI coding assistants. But not just any AI coding assistant, mind you; we're talking about local ones that live right on your own machine. Guys, thanks for joining me this week.

Tobias Mann (00:24) Good to be here.

Thomas Claburn (00:25) Thank you.

Brandon (00:29) So before we jump into what we learned during these experiments and how effective local large language models actually are as coding assistants, let's talk a bit about why we're having this discussion in the first place. I understand that AI coding assistants are about to become way more expensive, and I think, Tom, these were stories that you wrote recently. So can you walk us through a bit what's going on with the current cloud-hosted ones?

Thomas Claburn (00:52) Back in November, I think around Opus 4.5, pretty much all the developers started to realize that these models were actually getting pretty good, and vibe coding was less of a joke and more like, you know, maybe this will work. And then around February, with the OpenClaw craze, there was a lot more demand for coding agents, and people would start running these for long periods of time. And it sort of caught Anthropic and others unaware, Google and OpenAI as well. There were a lot of capacity constraints, a lot more people were trying these things out, and they ended up having to find ways to limit demand through session limits, and made a lot of people unhappy, but they basically just didn't have the compute available to serve capacity. And on top of that, they're serving a lot of these at a price that is loss-leading. They're trying to get people into the business, but these are unprofitable workloads for them. And if you look at something like Mythos, which came out as their big security model, it was too good for anybody but large companies with expensive payrolls to run.

Brandon (02:08) Right, right.

Thomas Claburn (02:10) It's clear that they're looking for ways to increase their revenue because they're investing a lot in the infrastructure to make this run, but they don't yet have the recurring revenue that justifies all this. The ramps look good. They're bringing more people on, but they invested a lot of money in this.

Brandon (02:29) OpenAI famously has never actually turned a profit in its history. I don't know about Anthropic personally, but I can't imagine they're doing a whole lot better. And so I understand the two specific examples you had was that Anthropic recently yanked Claude Code from Pro plans, but only for some people. Is that correct?
Thomas Claburn (02:49) Yeah, and they wrote that off as an A/B test. Basically they were doing live A/B testing and people noticed, and they were saying, oh, well, no, that doesn't apply to everyone. We're not going to change or take away from existing Pro users. But clearly there is someone there saying, hey, can we get away with charging this much but providing less service? And that doesn't happen unless you're trying to figure out a way to increase your revenue and reduce the demand on your services.

Brandon (02:53) Okay. Totally. Did they backtrack on that at all, or is that A/B test still going on?

Thomas Claburn (03:23) I don't think it's still going.

Tobias Mann (03:24) They really do do a lot of A/B testing. I think I have a Claude Code Max subscription that has a 50 percent discount on it right now. So I'm a little hesitant to give it up because, yeah, it's a hundred bucks a month and I don't use it nearly enough to justify that. But also if I cancel and decide I wanted it back, it'd be 200.

Brandon (03:46) Yes, that's the reason I'm still an Nvidia GeForce Now gaming cloud subscriber, right? Because I was there in the beta test and I've never given that discount up, even if I haven't used it in a while. So I understand. Claude did that, Anthropic did that, and then GitHub also has just straight up jumped to metered billing for AI, I think. Correct?

Thomas Claburn (04:05) Yeah, and they were taking a huge loss on things because they would give you a flat rate, but then people would use the most expensive models. And of course, those things are billed at different rates, and offering a flat rate versus these very inflated Opus 4.7 models, which also take a lot longer to process stuff, even if they're a little bit more efficient, they'll think for longer periods. It's just they're losing money. So everyone has to go to metered billing. And once that happens, it's going to cost people a lot of money. You can look at it now, even on a subscription plan: you'll write up a little widget and you look at the thing and it's, you know, $2 worth of whatever. You think, well, is that worth it? Maybe. And then if it's a more substantial project, you know, people spend hundreds of thousands of dollars on stuff. And if that's not returning you any revenue, are you still going to do that? So it's going to be interesting to see how this goes.

Brandon (04:59) Maybe local LLMs like what we're here to talk about today are kind of the market control, right? I'm sure there are gonna be people who are using these paid services, or were at least, that are gonna say, I don't care what the justification is, whether they're trying to make more money, sure, they might deserve to, or whether they just need to reserve compute resources. Either way, I can't afford to pay for this, so I'm going local. Maybe that'll be the cost control, right? Maybe there'll be some balance that kind of equals out there between, we're losing customers, so we got to make this cheaper versus we need to actually get some return on our investment someday. But I guess either way, right, this discussion is kind of indicative of why we're talking about using local LLMs. Specifically, I believe, coding assistants, which is what the two of you have been kind of spending some time working with. And I understand you've both had success in various ways with this. Let's talk a bit about, I guess, the one large story you wrote this week about local LLMs and just kind of more broadly what you guys think of them.
Tobias Mann (06:05) Many of us on the team have been playing with local LLMs in some shape or fashion for a couple of years now. And probably within the last year, certainly in the last six months, the models that are small enough that you can run on consumer hardware – and I'm not talking cheap consumer hardware, I'm talking about high-end consumer GPUs, quasi-workstation mini PCs, higher-end MacBooks and Macs – the quality of those models has jumped from being kind of like toys, tech demonstrators, to being really rather competent. At the same time, we've also seen the rise of these agentic coding frameworks. That's the other part of the equation. These are things like Claude Code. Claude Code is a framework that connects to models running in Anthropic's various data centers and cloud providers, and is what's actually orchestrating the generation of the code, the testing of the code, the validation of the code, and allowing developers to kind of use these as actually useful tools rather than just getting a code snippet that may or may not work out of a model, as you might have done with ChatGPT four years ago. Right around the time that Microsoft was going to usage-based billing and Anthropic was toying around with kicking the $20 a month Pro users off of Claude Code entirely to save on compute, Alibaba's Qwen team popped in with a relatively small 27 billion parameter LLM. Brandon (08:05) Relatively small. I just think it's funny how quickly the parameter counts have grown over the years. It's small. It's only a few billion, you know. Tobias Mann (08:08) Yeah, it's only 27 billion. You know, they popped in and they presented this as being frontier-quality coding out of a pretty small model. And so with all of the harnesses you need to do this, and now a model that is supposedly competent, it was just kind of the perfect storm, so to speak, to start looking into whether or not these small models could be a replacement for some part of the development flow, or for the entire development flow. And it's surprising just how good these small models have gotten. Thomas Claburn (08:53) I was experimenting just recently with the Qwen 3.6 and it's like a, whatever, 35 billion parameters ... but it's like a mixture of experts, so it's actually only like 3 billion, I think, when it's running. And it's an 8-bit quantization. And it's actually, it's working pretty speedily. And I was doing a sort of comparison test to see whether it would do a drag-and-drop metadata removal app on a Mac, which is like a very particular kind of thing. And initially it kind of suggested some things that were wrong. And I sort of cross-checked that with Claude and OpenAI, and they both came up with things that were not really right either, and then when I rephrased the question more carefully, it basically came up with the same answer as Claude. And what it tells me, to your point about the harnesses, is that I think a lot of what makes local coding work is how good the local harness is. And this was a point that came up yesterday in a piece I was working on about Mozilla, when they were talking about all the bugs they fixed with Mythos. One of the people I was talking to, Davi Ottenheimer, argued pretty strongly that you can do Mythos-quality work with a much smaller model as long as you have a good harness. Unfortunately, a lot of the setup of that is very kind of... there's not a standard way to do it.
So people will either figure out a way that makes it work or they'll set something up and it just doesn't work. But it's not really clear why that happens. And there's a lot of just sort of arcana about what skills you have and what the pipeline looks like. People are still figuring it out. But I think that local is where it will go, because there's nothing that beats the price of being able to run this for next to nothing, excluding your very expensive hardware. Brandon (10:59) And it's improving to the point where it's not like a while ago, when it was, this doesn't really work. Now we're reaching the point where these local models are viable, right? Well, like you said, you've got to word things carefully. I mean, that feels like anything from the early days of AI, right? It's like, OK, you got to word it carefully. But eventually, it's going to get better to the point where it's not going to have to be so particular. And you get the same results, hopefully. Tobias Mann (11:24) Yeah, there are two key technologies that I think have really helped these smaller models compete. The first, as Tom mentioned, is mixture-of-experts models. They only use a subset of the total parameter count for each token generated, which reduces the barrier to entry for hardware. The larger the models get, the more memory bandwidth you need in a consumer or even workstation class of product. It gets absurdly expensive as your memory bandwidth requirements increase. Brandon (12:01) Even for doing some of the basic ones here, I think you wrote in your story that, of the things you need, you need an M5 Mac with 32 gigabytes of memory. Or 24 gigabytes with multiple GPUs. You need a beefy machine from a consumer perspective to run this stuff. I've got an M1 Mac; I wonder if I could run some of these. My Mac's pretty fast. I haven't needed to think about upgrading it in several years. And I looked at the requirements and there's no way. Tobias Mann (12:29) So older Macs can do it. You will run into issues where the prompt processing side of it, that's the part where you hit enter on your prompt and then you wait, gets to be problematic. Like, you're talking several minutes of waiting for it to start generating a response, because older Macs lacked the matmul acceleration necessary for this. So they were brute-forcing a lot of the compute on the GPU. Starting with the M5 Max, they integrated the matmul acceleration into the GPU. It makes a huge, huge difference in terms of performance. That's why we recommended newer Macs. Tom and I, I think, are both testing on older M-series Macs. Yes, it can work, and especially with the 35 billion parameter mixture-of-experts model it's a little bit better, but the quality is generally worse than the dense 27 billion parameter model. Brandon (13:39) I guess I can understand that, right? I mean, the faster you can get the processing done, the better the response is. Tobias Mann (13:48) That's a really important part of this, because the other piece, the thing that has changed so that small models can be this competitive, is something called test time scaling. We saw this first with DeepSeek and OpenAI o1, which is this: you hit enter on your prompt and then you see the model thinking, and the model can work through different paths and then choose which path it wants to present to the user at the end.
So the idea behind test time scaling is that you can take a smaller model and have it think for longer in order to make up for the lack of parameters in that model. And so we have both of those things coming together in models like Qwen 3.6 27B or Qwen 3.6 35B. Brandon (14:30) Okay, cool. Now, I mean, for those who are interested in setting this up and go, okay, I've got some hardware that's beefy enough and I think I'm willing to give this a shot: this has also gotten a lot easier. I think in the past year, year and a half, two years, it's gotten a whole lot simpler to actually set up one of these things and run them locally. Is that accurate to say? It seems like it's gotten a lot simpler to configure this. Thomas Claburn (15:10) People often use Ollama or Unsloth; I'm using OMLX, which uses the Mac MLX framework. And these are basically the model serving platforms. You can get your model from a variety of places. Hugging Face is a very common one. But a lot of the model platforms like Ollama will fetch the model for you and handle all the installation stuff. The trick is a lot of them have different formats. And if you're using llama.cpp directly on your computer, which is the C-based model runner, it's going to have a different format than, say, something else. And they'll all talk to each other, but it tends to lock you into one particular way of doing it and you get used to it. There's not really a right way of doing it right now, and that's part of the problem: everyone's kind of figuring out, what's the right way to do this? Which one do I want to use? How do I configure it? Even just looking at the model and trying to decipher the quantization and the features it has isn't always clear to everybody. That, I think, hopefully will become more standardized as you get sort of more common knowledge about, yeah, this one works really well for me. Throughout the forums every week there's someone saying, yeah, this model is great for XYZ, and we'll try that out. I mean, that's really the experience you have to have: figure out what you're gonna use it for, try it, and see what other people are doing. And you can probably arrive at something that's useful locally. Brandon (16:45) Useful locally, I guess, also implies the need to do some security legwork, right? I know when we first started writing about local LLMs, things like OpenClaw, the going headline for any of those was, this local LLM has caused chaos for somebody again, right? I think, Tom, you wrote a couple of stories recently about running local LLMs safely. Has it gotten to the point where it's easier to do that safely, or is that still going to be a big concern for anyone doing this? Thomas Claburn (17:17) It is easier to do. The setup can be pretty complicated for these anyway. I just spent an evening building a sandbox for the Py agent, because Py is sort of a very permissive agent that comes out of the box in YOLO mode. It can sort of do anything. It has a very limited command set, but it has very few limitations. And that's by design. It's sort of like the way Flask is a very open Python framework. It's not this sort of "batteries included" thing, you know, compared to Django. Something like Claude will come with a bunch of sort of predefined ways to do things. Claude has its own sort of sandboxing system and you can add a lot of safety through things like hooks. You know, there are people who will write hooks that will intercept dangerous commands like, you know, rm.
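[To illustrate the kind of hook Claburn is describing, here's a minimal, hypothetical pre-execution filter in Python. It isn't Claude Code's actual hook interface, just a sketch of the pattern: assume the agent hands each proposed shell command to a script like this and only runs the command if the script exits zero.]

import re
import sys

# Hypothetical denylist: patterns for commands the agent should never run.
DENYLIST = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s+/",   # rm -rf / and friends
    r"\bmkfs(\.\w+)?\b",                      # formatting a filesystem
    r"\bdd\s+.*\bof=/dev/",                   # writing directly to a raw device
]

def allowed(command: str) -> bool:
    return not any(re.search(pattern, command) for pattern in DENYLIST)

if __name__ == "__main__":
    cmd = " ".join(sys.argv[1:]) or sys.stdin.read()
    if allowed(cmd):
        sys.exit(0)   # exit 0: the agent may run the command
    print(f"hook blocked: {cmd.strip()}", file=sys.stderr)
    sys.exit(1)       # non-zero exit: the agent should refuse or ask the user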
So there's a lot of ways to do it. Docker has a sandboxing system. That's what I tried to build on: basically figure out a way to do a Docker sandbox that runs Py and protects the local file system but leaves the internet open. And those are kind of the security decisions you have to make, because if this thing is totally enclosed in a VM and there's no way out, it can't really do anything! I mean, it can do anything with what you stick in the VM, but if you wanted to work on a project on your own system, you have to break that boundary somehow to get the file across and give it access, and then if you need to update something you have to open it up to a code repo somewhere. So there are a lot of security decisions you have to make, and for me the biggest one was just making sure it doesn't mess with my local files, and that gives me a little bit more confidence to run a model that I don't really know how well it will perform. Having run Claude for a long time, I'm a little bit more confident that it behaves well, but the risk is there for all of them. Tobias Mann (19:10) So we looked at, I think, three different agent harnesses in the piece. Claude Code, which you would think is just for working with Anthropic's stuff, but it works just fine with local models. It's two additional commands and you're up and running. It's very heavy. The system prompt is enormous. And so if you have lesser hardware, you might struggle a little bit with it. We also looked at Cline, which is a VS Code extension that is very easy to install, pretty fast to configure. And then we looked at PyCodingAgent, which Tom had suggested that we discuss as well. Out of the box, Claude Code and Cline both default to user-in-the-loop, deny-by-default kind of situations where it'll ask for permission before performing any commands or writing any code. It'll say, "I want to write this code. What do you think? Do you want to proceed?" But they can be made to go fully automatic and just say, you know, I'm not worried, YOLO, let's go. And so that is a different security model than what we saw with PyCodingAgent, which to Tom's point is just pure YOLO mode out of the box. And so the security models differ wildly depending on which agent harness you're using or which sandbox you're trying to play in, so to speak. There are several kinds of agent sandboxes that have emerged that default to blocking all outbound network activity, which really limits the capabilities of the agent and forces you to be deliberate about what you do and don't want it talking to. Others are just, you know, focused on limiting the blast radius if the agent decides to go AWOL and rm -rf, you know, the root file structure and just take the whole thing out. That's fine if it's in the container and it destroys the container, because you run two commands and you're back up and running again. It's less okay if you're running bare metal. Brandon (21:32) So security considerations: it seems like the core is basically just know what you're working with, right? Like, don't deploy an agent if you don't at least have some idea how the security apparatus built into it functions by default, right? And just what you can do with it. But I guess whether we think about security or not, a lot of the conversation around the need to run LLMs locally seems to boil down to compute resources and the cost to maintain them, the cost to operate them, the cost to serve them. And I guess, Anthropic, speaking of Claude, right?
Anthropic's big longshot this week, I guess, was a plan or a partnership they signed with SpaceX to occupy some space on the fleet of orbital data centers that Elon Musk seems intent on building. Tom, so is that gonna happen? Thomas Claburn (22:27) [Laughs.] I don't know. I would think that they would put them in the ocean before they would put them in space. And, you know, they talk about data centers, but I think I'll wait and see if they actually build them on land first, because there's a lot of terrestrial construction that is planned and hasn't happened. And we'll see. Tobias Mann (22:49) Yeah, the whole idea is that in space, you put the satellites in a sun-synchronous orbit and then they have basically unlimited power. The problem is that you have to get them there in the first place, which you need a launch vehicle for, and, last I checked, Starship still does not work. Brandon (23:09) I was gonna say, this seems awfully familiar to me if we just change orbital data centers to Mars colonization, right? Like, same problem here. We gotta have a vehicle that can get us there, and we do not. Thomas Claburn (23:21) The Hyperloop will be the way they'll take it out there. Brandon (23:24) Yeah, right. Tobias Mann (23:28) And once we get the orbital cluster in place, Elon wants to put a mass driver on the moon so that we can put even more of these things into deep space, for reasons, I guess. Brandon (23:42) It just seems like, I don't know, the idea that Anthropic is gonna get on board with these SpaceX data centers in orbit feels to me a lot like when a data center company is like, hey, we just signed a huge deal with this company that makes nuclear reactors that don't exist yet. And it's kind of like, cool guys, well, let us know when we've actually got a real solution for the compute crisis that you guys are dealing with right now that you caused. Thomas Claburn (24:05) I kind of interpret the whole space thing as like, we made a deal with SpaceX and we have to say something nice about their future plans. Brandon (24:18) Right. Yeah. Tobias Mann (24:19) This really boils down to Anthropic getting access to Colossus One, this massive, what, 150-megawatt AI factory, purpose-built for GPU training and inference. And so I think really what they need is compute and they cannot get enough of it. The inflection point has hit and we're seeing adoption, which means we need compute for inference, and we need more compute for inference than we've had in the past. And so I think really what this is, is: we'll say whatever you want. We will say that we will ride along on your Starship into the heavens and live in your space data centers. Just give us access to Colossus, please, because we're dying for compute. Brandon (25:15) We need it now, and it'd be great if it happened someday in orbit, right? So in the meantime, I guess, basically, have we reached the point where localized AI, local LLM coding agents, right? Are we at the point now where they might be able to ease some of the compute stress that these companies are feeling, or is this still early days, something that's going to have to be developed, not worth it for the average developer? Thomas Claburn (25:41) I think they're going to be useful for sort of prototyping stuff. One of the things I've done is, I'll run it through the local one and then I'll have Claude check it. You often get a lot of, you know, code fixes that way. So it is a way to offload some less important jobs.
I mean, you don't need a frontier model for everything. Brandon (25:49) Right. Right. I think that was kind of an argument you made, Tobias, about, you know, how using a massive data center to build an HTML page is not a good use of resources. Tobias Mann (26:09) Right. Using the biggest, baddest model to write some HTML is probably not the most efficient thing to do, and it's certainly well within the capabilities of these small models. The other thing I'll say is, if you look at how GPT-5 works, if you go to ChatGPT, not Codex, when you first enter a prompt, it gets routed to one of three models based on the complexity of that prompt. Conceivably, we could do the same thing with local models, where you sign into Codex and it does a check. If you have sufficient hardware, it will run some portion of that query through the local model, do a yes/no check on the big model in the cloud, and decide, at that point, whether or not it needs to be regenerated via the API, or it can move forward with what's generated locally. So there's definitely a path forward for local playing a bigger role in reducing the amount of compute required to scale all of this. Brandon (27:25) I guess the only key caveat there would be that if you're gonna install local LLMs on people's machines to split your compute load, you should probably let them know first, right, Google? Tobias Mann (27:38) You probably should. Brandon (27:43) Probably. Or you can just do it and ask for forgiveness later on. Who's gonna uninstall Chrome? You? Ha ha ha. Tobias Mann (27:49) Yeah, the other thing I would point out is that, while a 24- or 32-gigabyte GPU is very expensive, we're talking anywhere from $1,000 to, you know, $4,000-plus for GPUs with that memory, those GPUs could serve that model to an entire team, realistically. And so if you were thinking about this from an enterprise adoption standpoint, you could buy one machine that sits in the corner, basically silent, that could serve an entire dev team with this smaller model. Or you could spend a whole lot more, on something that still fits on a desktop in the corner, and run a big model, like a trillion-parameter model, locally on that system for that team. We're not just limited to these small models. You and I might be, but from an enterprise standpoint, a $70,000 DGX Station, for example, is capable of running very large models, trillion-parameter-scale models. And that's less than the cost of one developer for a year. Brandon (29:06) Yeah, so maybe that's the case now, right? Maybe we've just reached a point where there's enough value in these local models as a sort of prototyping testbed, as an entry-level dev replacement to do the first pass before someone more experienced, or with more parameters, reviews it. Yeah, so it might be there. That's interesting. I will be interested to see the evolution of AI models and, like you said, the kind of linking between cloud-based and local. I'll be interested to see how that develops. It could be the next phase of the AI industry's evolution. We'll see. We'll see. Something's got to give with compute, right? No matter what it is, we are going to be sure we're here on The Register to write about it, and here at The Kettle to talk about it. And until then, we will see you next week on the next episode.
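For readers who want to tinker with the routing idea Mann floats above, here's a rough sketch in Python. The assumptions are ours, not anything shipping today: a llama.cpp or Ollama server exposing an OpenAI-compatible endpoint on localhost, a hosted API key in the environment, placeholder model names, and prompt length standing in as a crude proxy for complexity.

import os
from openai import OpenAI  # pip install openai

# Local endpoint: llama.cpp's llama-server and Ollama both speak the OpenAI API.
local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
cloud = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(prompt: str, local_model: str = "qwen-coder", cloud_model: str = "gpt-5") -> str:
    # Crude heuristic: short, self-contained prompts stay on the local model;
    # anything bigger gets escalated to the hosted frontier model.
    client, model = (local, local_model) if len(prompt) < 2000 else (cloud, cloud_model)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Write a Python function that strips EXIF metadata from a JPEG."))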

Memory godboxes could offer relief from the RAMpocalypse

Sun, 2026-05-10 14:00
In modern datacenters, storage can live anywhere — local to the machine, remotely accessed over the network, and/or shared between systems. The next generation of servers will treat system memory in much the same way. Systems will still have some local DDR5, but the bulk of it will be remotely accessed from what some have taken to calling the memory godbox. The ongoing DRAM shortage has created a perfect storm for the proliferation of these appliances, which not only allow memory to be pooled, but also allow data stored in that memory to be shared by multiple machines simultaneously. In effect, memory becomes a fungible resource. More importantly, your next round of servers will probably support the tech, if they don't already. CXL finally has its moment to shine The technology at the heart of these memory godboxes isn’t new. Compute Express Link (CXL) has been slowly gaining traction since its introduction seven years ago. As a quick refresher, CXL defines a common, cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals. The technology comes in a few different flavors: CXL.mem, CXL.cache, and CXL.io, which, as a whole, have implications for disaggregated compute. Imagine a rack with a CPU node, GPU node, memory node, and storage node, which can talk to one another completely independently. That's the core idea behind CXL. CXL piggybacks off the PCIe standard, which means in theory it should be broadly compatible, but, up to this point, it's primarily been used with memory devices. The 1.0 spec opened the door to memory expansion modules, which allow you to add more memory by slotting them into a CXL-compatible PCIe slot. To the operating system — assuming you're running Linux, that is — the extra memory is largely transparent, showing up as if it were attached to another CPU socket, just one without any additional compute. The 2.0 spec, which showed up in 2020, added basic support for switching, which meant memory could be pooled and then allocated to any number of connected systems. AMD and Intel's current crop of Epycs and Xeons already support these appliances. But while the memory can be partitioned and reallocated to different machines as needed, two machines can't work on the same data simultaneously. Unless you were memory-constrained, the added complexity of CXL 2.0 didn't offer much benefit over simply using higher-capacity DIMMs in the first place. At least, not until memory prices went through the roof. Where things really get interesting is when the 3.0 spec arrives in AMD and Intel's next generation of Epycs and Xeons. In fact, from what we understand, Amazon's Graviton5 CPUs we looked at in December already support the spec. CXL 3.0 introduces two key capabilities that make it particularly interesting for memory appliances. The first is support for larger topologies: Multiple CXL switches can be stitched together into a fabric. The second is support for memory sharing: Rather than partitioning memory into slices only accessible to one machine at a time, memory can be shared between machines. In theory this could allow two machines running the same set of workloads to get by with a memory footprint closer to that of one. It's a bit like deduplication for memory. In fact, we already do this in virtualized environments like KVM, but it now works across machines.
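If you're curious what that CPU-less memory node looks like from userspace, here's a minimal sketch in Python that simply reads the kernel's sysfs NUMA topology on a Linux host. Treat it as an illustration rather than a CXL management tool; memory-only nodes can have other causes too.

from pathlib import Path

def numa_nodes():
    # Each NUMA node the kernel knows about appears under /sys/devices/system/node.
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        # First line of meminfo looks like: "Node 0 MemTotal: 131072000 kB"
        mem_kb = int((node / "meminfo").read_text().splitlines()[0].split()[-2])
        yield node.name, cpus, mem_kb

if __name__ == "__main__":
    for name, cpus, mem_kb in numa_nodes():
        kind = "memory-only (possibly CXL)" if not cpus else "CPU + memory"
        print(f"{name}: cpus=[{cpus or 'none'}] mem={mem_kb // 1024} MiB -> {kind}")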
There are security and performance implications to all of this. Thankfully, in CXL 3.1 and later, the consortium introduced confidential computing capabilities into the spec, allowing for isolation where necessary. On the performance end of things, CXL 3.0 moves to PCIe 6.0 as a baseline, which provides 16 GB/s of bidirectional bandwidth per lane, or roughly 8 GB/s in each direction. Assuming 64 lanes of CXL per CPU, that works out to an additional 512 GB/s of bandwidth in each direction. So memory bandwidth shouldn’t be too much of an issue for most applications. Latency, on the other hand, is a different story. CXL-attached memory is going to add some latency. However, as we’ve previously discussed, the latency isn’t as bad as you’re probably thinking — on the order of a NUMA hop, or about 170 to 250 nanoseconds of round-trip latency. Obviously, the farther the memory appliance is from the host CPU, the worse the latency is going to be. Late last year, the CXL consortium ratified the 4.0 spec, which among other things doubles the bandwidth from 16 GB/s per lane to 32 GB/s by re-basing on PCIe 7.0. However, it'll be a while before we see appliances based on the spec. Where’s my memory godbox? There are several companies developing hardware for these kinds of networked memory appliances. Panmnesia’s CXL 3.2-compatible PanSwitch is one of the most sophisticated examples. The switch features 256 lanes of connectivity for CXL memory modules, devices, or CPUs to connect, pool, or share resources. If you’re okay with memory pooling and don’t need the niceties of CXL 3.0, then there are already several memory appliances available that are compatible with the latest generation of Xeon 6 and Epyc Turin processors. Liqid’s composable memory platform, for example, can provide a pool of up to 100 TB of DDR5 to as many as 32 hosts. Meanwhile, UnifabriX Max systems provide CXL 1.1 or 2.0 connectivity to 16 or more systems with support for CXL 3.2 already in the works. We suspect that as more CXL 3.0-compatible CPUs and GPUs hit the market, more of these memory godboxes will appear. AI eats everything Don’t get too excited. While network-attached memory has the potential to reduce an enterprise's infrastructure spend, those same qualities make it attractive for the very thing driving the memory shortage in the first place. AI adoption has driven demand for DRAM off the charts. In addition to the HBM used by GPUs, DDR5 is being used for key-value cache offload during inference. These KV caches store model state and can chew up significant amounts of memory — often more than the model itself — in multi-tenant serving scenarios. Rather than discard these caches and recompute them when that state is needed again, it’s more efficient to offload them to system memory and eventually flash storage. The problem with using flash storage is that it has a finite write endurance. After a while it wears out. Instead, CXL memory vendors are positioning the tech as a more resilient alternative. That’s bad news for enterprises looking to these memory godboxes for salvation from the RAMpocalypse. ®

Both Fedora and Ubuntu will get AI support – soon

Sun, 2026-05-10 13:00
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances. An epic and still-growing thread in the Fedora forums states one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned. Fedora Project Lead Jef Spaleta, who took over the role from Matthew Miller a year ago, remains resolute, saying: I have zero evidence in front of me that users are being driven away from Fedora because of AI. As far as Red Hat’s community distribution goes, while this may be controversial, this should not be a big shock. In October last year, The Register reported that the Fedora council approved a policy allowing AI-assisted contributions, and anyone following the IBM subsidiary’s movements will already know that last June’s RHEL 10 release includes access to an LLM-based online helper chatbot: we tried it out when the product was released. We also reported on the managers of Red Hat’s Global Engineering department being notably keen on the use of AI just last month. Since Red Hat has other offerings for slow-moving stable server OSes – and arguably because Debian, Ubuntu, and their many derivatives have the stable-desktop-distro space nicely covered already – Fedora has a strong focus on providing a distro for developers, and Spaleta’s announcement makes this clear. The goal is: to build a thriving community around AI technologies by focusing on three key areas: equipping developers with the necessary platforms, libraries, and frameworks; ensuring users experience painless deployment and usage of AI applications; and establishing a space to showcase the work being done on Fedora, connecting developers with a wider audience. He also spells out what it doesn’t want to do: Non-goals: The system image will not be pre-configured with applications that inspect or monitor how users interact with the system or otherwise place user privacy at risk. Tools and applications included in the AI Desktop will not be pre-configured to connect to remote AI services. AI tools will not be added to Fedora’s existing system images, Editions, etc, by the AI Desktop initiative. In other words, tools for developers, not for end-users, with a strong emphasis on models that run locally, and which preserve the user's privacy. It’s also worth pointing out that Fedora has had an AI-Assisted Contributions Policy in place for six months, and earlier this month, Fedora community architect Justin Wheeler explained in some detail Why the Fedora AI-Assisted Contributions Policy Matters for Open Source. Our impression is that the Fedora team feels that it needs to keep Fedora relevant to the growing interest in LLM-bot-assisted tooling, and that it can address concerns from hardcore FOSS types by ensuring that this means local models, built according to FOSS-respecting terms, deployed in privacy-respecting ways. Fedora is not alone in this, though. There are also ructions across the border in Ubuntuland. Right after the release of Canonical’s new LTS version, Ubuntu 26.04 Resolute Raccoon, Canonical’s veep of engineering Jon Seager laid out the future of AI in Ubuntu. We interviewed Seager last year during the 25.10 Ubuntu Summit, and back in January this year, he published his views on Developing with AI on Ubuntu. Now the plans are firming up.
Like Fedora, there’s a strong focus on local models and confidential, privacy-first deployments – and ensuring that the OS and the tools support GPU acceleration from the big hardware players in that space. However, unlike Red Hat, Canonical isn’t pushing its developers towards these tools. In what we see as a veiled jab, Seager’s announcement says: We are not setting shallow metrics on token usage, or percentages of code written with AI, but rather incentivising engineers to experiment and understand where AI tools add value. Initially, the focus is on users instead: AI features in Ubuntu will come in two forms: first as a means of enhancing existing OS functionality with AI models in the background, and latterly in the form of “AI native” features and workflows for those who want them. As Fernando Mancera’s exit shows, an emphasis on what could be termed FOSS-friendly AI – open models, privacy-centric, local execution and so on – is not enough to placate those who are really strongly averse to these tools. The Reg FOSS desk counts himself firmly in this camp. Back in January, we reported on the rise, fall, and resurrection of OpenSlopware, a list of FOSS projects which contain LLM-generated code, integrate LLMs, or even show the traces of the use of LLM agents. Soon, it seems inevitable that Fedora and Ubuntu will both feature here. Resistance, though, is also rising. Stop Slopware tries to help explain why and how to avoid it, and there’s also The No-AI Software Directory for projects that have explicit LLM-free policies, whether they’re FOSS or not. Bootnote It amuses us to note that both the Ubuntu and Fedora forums use the same software, called Discourse. (It’s a sort of web forum as designed by people who have heard of mailing lists, but don’t know how to use them and find the idea of bottom-posting confusing.) Some could interpret this shared adoption as a sign of underlying similarities between the two projects. ®

HP stuffed a PC into a keyboard. We took it for a spin

Sun, 2026-05-10 09:30
The early history of personal computers is stacked with systems such as the Apple II and the Commodore 64 that had the components living inside a keyboard. But as technology evolved, the keyboard became a peripheral and the PC itself was either in a separate box or the whole system was a laptop. Now, HP has a new spin on this decades-old idea. It embeds a full-fledged AI PC inside a 101-key keyboard you can carry with you from the office to home. Unlike ‘80s microcomputers or hobbyist-oriented products like the Raspberry Pi 500, the EliteBoard G1a is squarely targeted at business. The system is part of HP’s commercial lineup, alongside its EliteBook laptops, and, for better or worse, it comes with HP Wolf Security preinstalled. The company clearly hopes organizations will buy these in bulk. But to benefit from it, you really have to prefer a mobile keyboard to a traditional laptop, all money aside. Who’s it for? The EliteBoard G1a is trying to create a new niche. When we talked with product managers at HP, they suggested IT departments would buy these computers for two types of workers. The first group is so-called "dual deskers" - knowledge workers who have a desk with a monitor at work and another at home. The second group includes deep-pocketed call centers or environments where desk space is at a premium. From time immemorial, dual-deskers have carried laptops and closed their lids when they docked to a monitor at work. With the EliteBoard, they could simply schlep the keyboard, which weighs a mere 1.49 pounds – about half the weight of a lightweight laptop. To make this situation work in companies with managed systems, we have to assume that either the IT department would give out monitors to use at home or offer some reason (a subsidy? a mandate?) for employees to buy their own for home. The EliteBoard connects to monitors using its USB4 port, so its ideal monitor is one that has Thunderbolt or USB video connectivity built in. Less-expensive and older monitors don’t have this type of connectivity, but select configs of the EliteBoard come with an optional USB-to-HDMI adapter that you can use with other monitors, and it has a USB pass-through for power. That said, HP demonstrated the EliteBoard at numerous press events by showing how much desk space it saves by using a single USB cable to get power, video out, and connectivity to peripherals via the monitor. So if companies want employees to be able to take advantage of this scenario at home, that means shelling out another few hundred bucks for a modern monitor, or making employees do it. Today, companies with limited desk space for a call center or another cramped work area could just buy a tiny desktop to sit behind the monitor or next to it. However, building all of the PC’s guts into the keyboard makes a lot of sense for space savers, because a keyboard is something every PC needs and a desktop chassis is not. If a company wanted to, it could give each employee their own EliteBoard, have them plug it into a monitor during work time and then have them stick it in a drawer when they go off shift and someone else comes on. The problem for call centers is that the HP EliteBoard G1a is much more powerful and much more expensive than what they need. At press time, the G1a was priced at $1,499 for the lowest end config. And most companies probably don’t need employees to each have their own PC that they lock away after they punch out. 
“The call center angle is probably the stronger pitch, but those buyers are shopping entry-to-mid-market. They want something cheaper and simpler than a mini desktop, not a Copilot+ PC with up to 64GB of RAM,” said Kieren Jessop, a research manager with analyst firm Omdia. “HP has built an impressive piece of engineering in search of a problem that most enterprises have already solved with a laptop — or will solve with a thin client.” Configurations HP makes the EliteBoard G1a in a variety of configurations that vary by market. Companies can get it with various AMD Ryzen CPUs, up to 64GB of RAM and an SSD up to 2TB in capacity. It comes with either a detachable or embedded cord, and optionally with a 32 WHr battery that promises up to 3.5 hours of endurance. Why would you need a battery on a product that demands to be used at a desk and plugged in? The most likely reason is to let the keyboard go into sleep mode when it’s in your bag. Employees could also hook the EliteBoard G1a up to a portable monitor and use it unplugged that way, but then why not just buy them a laptop? At press time, prices ranged from $1,499 to $3,423 in the US. The lowest-end config has a Ryzen AI 5 Pro 340, 16GB of RAM, an integrated cable, and a 256GB SSD. Fifty bucks more will get you the same configuration with a 512GB SSD, as per HP.com. The highest-end config listed comes with a Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD, and sells for only $1,999 at B&H but a whopping $3,423 at HP.com. Our review config, which sports 64GB of RAM, a Ryzen AI 7 Pro 350 CPU, and a 2TB SSD, has not been listed for sale in the US, and HP didn't answer when we asked how much it would cost. However, we’d assume that it would cost a lot more than $1,999. Price vs a Laptop If all you do is dock your PC at home and at work, you might think, “why pay for a laptop when I don’t need a built-in screen?” But it’s hard to make that argument when the laptop is actually less expensive. Right now, you can get an HP EliteBook 6 G1aN with the same AMD Ryzen AI 7 350 CPU, along with 24GB of RAM and a 512GB SSD, for just $1,299 – that's actually less than the cheapest EliteBoard. A custom-configured HP EliteBook 8 G1a with the Ryzen AI 7 Pro 350 CPU, 32GB of RAM, and a 512GB SSD is just $1,799. If you’re comparing the total cost of ownership versus a laptop, also consider the price of a monitor if your users don’t already have one. While you could use an adapter, the ideal use case involves a USB-C monitor that transmits data and power over a single wire. The cheapest HP-branded USB-C monitor I could find at press time was the HP E27k 4K monitor, which was selling for $504. However, I saw a Dell-branded USB-C monitor, the S2725DC, on sale for just $236 at Amazon. If you’re an IT department and you’re kitting out someone for home and office use, you might need to buy them two monitors. Design At 14.1 x 4.7 x 0.7 inches, the EliteBoard G1a is the size of a typical, full-size keyboard complete with numpad. It’s a boring but office-friendly dark gray color with a very thin bezel around the keys. At first glance, there aren’t many ways to know that this is more than just a keyboard. There’s a power button / fingerprint reader that’s located in the upper right corner of the keyboard, though you might easily mistake it for just another key, until you press it and see the blue light turn on. Turn the keyboard around and on the back lip you’ll notice a thin vent for airflow.
This computer definitely has a fan and you can hear it quite prominently at times. There are also two USB-C ports, a USB4 40 Gbps port and a 10 Gbps port, unless you have the embedded cable, in which case, you just have the 10 Gbps port. Clearly, the 40 Gbps port is the one you’ll want to use for docking, but you can use the 10 Gbps port to connect the dongle for the included wireless mouse or other peripherals. There’s also a security cable lock slot on the left side. So if you want to chain this to a desk, you can, but we’d argue that defeats the point of the machine. But how well does it type? Since this is a computer-in-a-keyboard, the most obvious question we need to answer is “how’s the typing experience?” Pretty decent. On the bright side, the EliteBoard G1a has a generous 2 mm of travel, which is more than you’ll find on most laptops, where even 1.5 mm is deep. The keys feel pretty snappy and are in the same feedback league as those on my Lenovo ThinkPad X1 Carbon, but the ThinkPad’s keys have a more curved shape, which is better than the flat tops on the EliteBoard. If you’re burning the midnight oil, there’s a built-in backlight which you can enable by hitting the F9 key. It has two different brightness settings so you can decide just how much you want it to shine through. The layout is pretty standard for a full-size keyboard with a numpad. However, I don’t like how small the arrow keys are, and the Pg Up and Pg Dn are just tiny. There’s no empty space around these keys, which I use a lot when editing documents, so it’s far too easy to miss them. Even on most laptops, these keys are larger. Another downer is the lack of flip-up feet on its bottom. I like to angle my keyboard up at a 15 to 30 degree angle, but this one is short and flat to the desk. To save my wrists, I always use a gel-filled wrist rest when I type and, without feet to elevate the keyboard, I’m typing down onto the keys because it’s so much lower than the gel pad. This won’t be as much of an issue for folks who don’t use wrist rests. In short, if you’re used to laptop keyboards or the low-cost keyboards that come with most desktop computers, the EliteBoard G1a will probably seem like a nice step up. However, if you want the best possible typing experience, there’s an entire ecosystem of mechanical keyboards out there with much deeper travel and more feedback. If you’re not a gamer and you want the best possible typing experience, I recommend a mechanical keyboard with either clicky or tactile switches. Unless you go for a low-profile keyboard, you’ll be getting between 3.6 and 4 mm of travel, so you won’t bottom out as easily when typing. I prefer clicky switches like the Kailh Box White (my favorite) or Cherry MX Blue, but those make some noise so, if you like quiet, Cherry MX Brown switches will do the trick. To see the difference between my daily driver mechanical keyboard, an Akko 3098N with Kailh Box White switches, and the EliteBoard G1a, I performed the 10fastfingers.com typing test on both. On HP’s keyboard, I managed a strong 96 wpm, which is at the lower end of typical for me, with a six percent error rate. On my daily driver, the numbers were a better 101 wpm with a two percent error rate. Your mileage will vary. Speaker and Microphone The EliteBoard G1a has both built-in bottom-facing speakers and a microphone array. In our tests, the speaker was more than loud enough and it was clear enough for voice calls, though we wouldn’t recommend listening to music on it for too long. 
The drums in AC/DC’s Back in Black sounded a little tinny, though there was a clear separation of sound with the vocals appearing to come from one side while the percussion came from another. The dual-array microphone was also passable, but not good enough for podcasts. When we talked to a coworker using the built-in mic, she said our voice was clearly audible but a little echoey. In the box and preloaded Depending on which config you get, your HP EliteBoard G1a may come with a variety of different accessories in the box. All versions come standard with an HP wireless 675M mouse that connects either by Bluetooth or by an included USB-C wireless 5-GHz dongle. It is not a particularly fancy mouse but it has a couple of side buttons and a scroll wheel. I found myself using my Logitech MX Master 3 mouse instead, because it’s ergonomically shaped and highly programmable. My review unit also came with the optional soft canvas cover sleeve you can use to protect the EliteBoard G1a while you’re carrying it around. I found this add-on to be about as useful as a laptop sleeve. It might offer some protection and padding for when you stick the EliteBoard G1a in an existing backpack, but it’s not going to replace your briefcase or your backpack when you’re commuting. I also got the optional HDMI multiport hub, which is a must-have if you don’t already have a Thunderbolt or USB4 docking station or a monitor with that kind of connectivity built in. The hub connects to the USB4 40 Gbps port on the EliteBoard and features two USB-C ports (one for power, one for connectivity), an HDMI out cable for connecting to a monitor, an Ethernet port for wired networking, and an HDMI-in port for a second monitor. There’s an optional, slim 65W USB-C power adapter that’s helpful if you aren’t connecting to a monitor or docking station that supplies power. If you don’t get one in the box, it’s easy enough to find one for $15 to $30 on Amazon. Also, if your EliteBoard does not have an embedded cable — mine did not — you get a braided USB cable in the box. The less-expensive configs of the EliteBoard all have embedded cables, but we recommend getting a model without one because it’s easier to carry around without a cable hanging off of it. HP does not preload a lot of software onto the EliteBoard but it does come with a three-year subscription to HP Wolf Security, which normally costs $36 a year for individual subscriptions. HP Wolf has a malware/virus scanner, a threat containment feature, a secure browser, OS resiliency (for recovering from corruption and doing a reinstall), and application persistence, which prevents unwanted changes to security software like HP Wolf itself. Since it has an NPU (neural processing unit) that’s capable of more than 45 trillion operations per second (TOPS), the EliteBoard G1a qualifies as one of Microsoft’s Copilot+ PCs. This means that it has some added local AI features that not every PC gets from Windows 11, including Cocreate image generation in Paint, Windows Studio Effects handled locally for your webcam, translated Live Captions from any audio input, and Recall, a controversial feature that takes screenshots of all your work to help you “remember” what you were doing at any given time. Fortunately, Recall is disabled by default. Performance Equipped with an AMD Ryzen AI 7 Pro 350 CPU, 64GB of RAM, and a 2TB SSD, our review configuration of the EliteBoard G1a handled everything I threw at it. 
I used the system on and off as my daily driver PC for work for a period of several weeks and it was always smooth and responsive, even as I had dozens of Chrome tabs open and Slack running across two 4K monitors I had connected via a Thunderbolt 3 docking station. I should note that, no matter what I was doing, the fan on the EliteBoard G1a was frequently running and was often quite audible. It’s no louder than most notebooks I’ve tested, but if you’re expecting total quiet, look elsewhere. My editorial workload is not nearly as demanding as some folks’ day jobs so, to see how the EliteBoard G1a stacks up, I ran it through a series of benchmarks and compared the results to those from two laptops I had access to: a Lenovo Yoga Slim 7x with a Qualcomm Snapdragon X Elite X1E-78-100 CPU, and a Lenovo ThinkPad X1 Carbon with an Intel “Meteor Lake” Core Ultra 7 165U processor. The Ryzen AI 7 PRO in the EliteBoard debuted in 2025 with 8 cores, 16 threads, and a maximum boost clock of 5 GHz. It features built-in AMD Radeon 860M graphics and a Neural Processing Unit (NPU) that’s capable of achieving 50 TOPS for better local AI. Its DDR5 RAM runs at 5,600 MHz. Released in 2024, the Snapdragon X Elite X1E-78-100 has 12 cores and 12 threads with a boost clock that goes up to 3.4 GHz, along with an NPU that does 45 TOPS. It’s an Arm processor, so the laptop that runs it uses Windows on Arm. The Yoga Slim 7x laptop that we tested had 16 GB of LPDDR5x RAM running at 8448 MHz. The oldest of our test group, vintage 2023, the Intel Core Ultra 7 165U has 12 cores and 14 threads, but only two of those cores are performance cores that can boost up to 4.9 GHz, while the others are a mix of efficient cores and low-power efficient cores that boost up to 3.8 and 2.1 GHz respectively. The ThinkPad X1 Carbon we tested with it had 64GB of LPDDR5x RAM running at 6400 MHz. In our tests, the EliteBoard G1a always eclipsed the ThinkPad X1 Carbon, which is not a surprise considering its much-older processor. However, the Snapdragon-enabled Yoga Slim 7x outpaced it on some benchmarks. Primesieve This test counts the prime numbers under one trillion and returns a result in millions of prime numbers per second. The benchmark is particularly heavy on SIMD instructions like AVX-512 or Arm’s Neon and SVE vector extensions, making it a good proxy for some of the more workstation-centric tests we’ll look at shortly. It runs across both single-thread and multi-thread workloads, with big performance boosts for parallel processing. Using just a single thread, the EliteBoard edged out the competition with 415 million primes per second (MPS), compared to the Slim 7x’s 352. However, the Slim 7x outperformed it when using multithreading, delivering 2,686 MPS to the EliteBoard’s 2,145. One thing to note is that, while the EliteBoard has more threads, it has fewer actual cores. The X1 Carbon wasn’t even in the same ballpark. This will become a theme across our test suite. Blender 3D rendering is always a challenge and, to be honest, it’s hard to imagine somebody buying an EliteBoard for this purpose. However, it’s always worth noting what the system can do. We ran Blender, a very popular 3D modeling app, using three scenes: Monster, Junkshop, and Classroom. As you can see, the Slim 7x and its 12-core Snapdragon processor were anywhere from 34 to 75 percent quicker, depending on the content. Still, the EliteBoard turned in respectable scores on something you wouldn’t expect it to do.
Handbrake x265 Video transcoding is another resource-intensive task and one that occurs in many scenarios, including game streaming, video editing, and even video conferencing. To test how the EliteBoard handled video transcoding, we used Handbrake to convert a 4K 60 fps video to 1080p using an x265 encoder at the medium preset with a constant quality of 18. Our results are measured in frames per second (fps). Again, the EliteBoard was far superior to the ThinkPad, but was a good 45 percent behind the Yoga Slim 7x. Still, this is solid performance that’s more than workable. Llama.cpp One local AI task you might want to conduct is running an open-source model as a chatbot on your PC rather than sourcing it from the cloud. This will give you more privacy than using OpenAI, Claude, or Copilot on the web, and it’s completely free. So we ran the GPT-OSS 20B open-weights model using Llama.cpp as our client and timed the number of milliseconds it took to generate the first token. Here we see that the Snapdragon processor and faster RAM on the Yoga Slim 7x gave it a definite advantage, taking 39 percent less time than the EliteBoard to get there. The EliteBoard also generated about half as many tokens per second. However, it beat the pants off the ThinkPad X1 Carbon, getting to the first token more than twice as quickly while generating 30 percent more tokens per second. It’s worth noting that these tests were run on the CPU cores and didn’t harness the chip’s integrated GPU or NPU. Whisper.cpp One common local AI workload a business person might use is transcription. Let’s say you had an audio file and you wanted to convert it into readable and editable text. You might use a tool based on Whisper, a popular free model from OpenAI. For testing, we used Whisper.cpp, an implementation of Whisper written in C++, with the Whisper Medium EN model transcribing a 10-minute audio clip. Here, the EliteBoard transcribed the audio at 2.4x real-time speed, while the Yoga Slim 7x was faster at 3.4x. Those extra cores are doing a lot of heavy lifting here. That said, if you’re converting 10 minutes of audio in less than five minutes, that’s pretty good. LLVM Compile For those using the EliteBoard for programming, compile times matter. So, we compiled the LLVM toolchain from its source and measured the time. This isn’t a trivial compile job and therefore represents a worst-case scenario for developers considering the EliteBoard. Here it took a modest 19 minutes and 44 seconds, which was more than double the time it took the Yoga Slim 7x. On high-end desktop workstation hardware, this same workload can be completed in under five minutes, so if your day job regularly requires compiling large projects, you might want to spring for something more capable, or perhaps not. “My code is compiling” is a pretty good excuse for taking a 20-minute break. 7-Zip Compression and decompression are very taxing on a CPU and are very common scenarios we see today. So we fire up 7-Zip and measure its ability to do both tasks in both single-threaded and multi-threaded scenarios. With a single thread, the Slim 7x and the EliteBoard basically tie at compression, while HP’s computer holds the edge in decompression. However, when we move to multi-threaded scenarios, the Snapdragon X Elite’s 12 physical cores easily beat out the AMD Ryzen AI 7 Pro 350’s eight cores and 16 threads. LibreOffice: ODT to PDF Conversion We tested how long it takes LibreOffice to convert 50 image-heavy ODT files into PDFs.
This workload is lightly threaded so it favors higher clock speeds over more cores. The results bear this out as the EliteBoard, with its Ryzen AI 7’s higher performing cores, beat out the Slim 7x by 22 percent. Despite its older processor, the ThinkPad actually manages to tie the Slim 7x in this test. Repairability For IT departments that do their own service, the EliteBoard G1a has plenty to offer. Its back surface is held on by just four screws and pops off easily. Underneath, you get full access to the motherboard and a number of easily-removable components, including the DDR5 SODIMM RAM, the M.2 SSD, the WLAN card, the fan, the optional battery, and the speakers. You can even replace the keyboard itself and leave the computer part intact. Bottom line The HP EliteBoard G1a delivers strong performance in a unique and compact form factor that saves desk space and reduces the weight you carry back and forth. If you don’t want a laptop but do want a portable computer, this is your best choice. It provides a better typing experience than most laptops and a more space-efficient design than most desktops. However, in the current marketplace, this device does not represent a significant savings over a similarly configured laptop. Depending on what laptop you choose to compare against, you might save a few hundred dollars, but when you add the cost of the monitors you need to pair with it - if you need to purchase those - it’s a wash. HP has set out to make a unique product with the EliteBoard G1a and it has succeeded in building a very competent and capable computer-in-a-keyboard. If you’re an IT decision maker, you’d buy this device for folks who work out of one or two distinct locations (home and office or multiple offices) and never need to get online from the road or from a conference room. Whether that’s a common scenario in your workplace will determine if this product is right for you or your fleet. ®

Google tweaks Chrome AI privacy wording, insists processing stays on-device

Sat, 2026-05-09 17:57
Google has changed Chrome's disclosure language about how its on-device AI works, but that doesn't mean the company intends to capture on-device AI interactions. The Chrome menu modification, which isn't universally rolled out yet even in Chrome 148, was noted this week on Reddit. The "On-device AI" message in Chrome's System settings previously read, "To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work." But the message changed recently – it lost the phrase "without sending your data to Google servers." That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device. "Why was the sentence 'without sending your data to Google servers' removed from the on-device AI description in Chrome's Settings UI?" Hanff asked. "Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?" Asked about this, a Google spokesperson said, "This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device." It appears this situation deserves a more genteel rendering of Hanlon's Razor – "Never attribute to malice that which is adequately explained by stupidity." In this case, it's "Never attribute to malice that which is adequately explained by bad timing." Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API's arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google's 4GB Nano model onto users' devices. The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat. In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years when local AI was implemented in Chrome 126 as a preview program. While Google hasn't yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model. "We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model," a Google spokesperson explained, pointing to relevant help documentation. "It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update." The edit to the "On-device AI" message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device. But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling would fall under the privacy policy of the website interacting with the user's Nano instance. 
Google decided to change its "On-device AI" message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device, in support of some service it provides. In that scenario, the Google site would have access to the prompts it sends to, and the responses it gets from, the user's on-device model. That interaction would still happen "without sending your data to Google servers," at least in the narrow sense that no prompt would be sent to a model running in Google Cloud. But since the user's on-device, Chrome-resident Nano model would send data to the Google site in response to that site's API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit.

Google's decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people's computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users' local storage and occasional bursts of their on-device compute. ®

macOS 27 threatens to bury Time Capsule, FOSS brings a shovel

Sat, 2026-05-09 12:25
The next major release of macOS looks likely to remove Apple Filing Protocol (AFP) support, stopping Time Capsules from working… but FOSS, uh, finds a way. The current version of macOS "Tahoe" 26.4 already has network Time Machine issues, especially for folks using Apple Time Capsules. It looks like macOS 27 may completely remove the network protocol they need. However, the Time Capsules run NetBSD under the hood, and that means the FOSS world has been able to come up with a workaround. It's called TimeCapsuleSMB, and it aims to keep older Time Capsules usable with modern macOS.

It's eight months since Apple released macOS 26, and the company's annual release schedule means that macOS 27 is looming. Although Cupertino hasn't told the world much about it yet, it is warning sysadmins to "prepare your network environment for stricter security requirements." Reading the bulletin, we found it rather clixby: while it firmly warns that security checks will become stricter, it doesn't spell out which products will change or how. Happily, there are elder Mac gurus out there who interpret Apple's sometimes Delphic utterances, and Howard Oakley is one of the greatest. In a post about networking changes coming in macOS 27, he translates that it will require TLS 1.2 or above. (The Register explained TLS back in 2002, and version 1.2 appeared about six years later.) However, he also warns that it could mean the end of AFP, which is essentially the old AppleTalk file-sharing protocol carried over TCP/IP, now at version 3.4.

AppleTalk was the Mac network protocol for file sharing from System 6 onward. In 2013, OS X 10.9 "Mavericks" made Microsoft's SMB the default file-sharing protocol in place of AFP, and it looks like AFP now faces the ax: it was officially deprecated in macOS 15.5. To be fair, macOS 26 Macs started displaying a warning to Time Capsule users nearly a year ago. Apple introduced the first model of Time Capsule in 2008, and the fifth-generation version in 2013. The company discontinued the whole AirPort product line in 2018. All generations support only AFP and SMB version 1. That’s the original version that appeared with LAN Manager in 1987, and we reported on Samba dropping SMB1 back in 2022.

The good news is that even if Apple kills its original file-sharing protocol next year, the FOSS community is on the case and won't let working kit die. The Time Capsule hardware is essentially a box containing a Wi-Fi access point, a hard disk, and an Arm chip with just enough software to share that HDD as network-attached storage. Apple didn't write this software from scratch: it picked up and customized NetBSD for the job. The first four generations of Time Capsule (flat square boxes) run NetBSD 4, and the fifth-gen devices – the tall tower-shaped models from 2013 onward – run NetBSD 6.

That gave Microsoft's James Chang an opening. Since the devices run NetBSD, it's possible to compile a newer version of Samba and copy it somewhere that the tiny embedded Arm computer can find it. Teaching such old kit a new trick is never that easy, though, and he faced a number of challenges, which he details in the design section of the project README. Among them are machines with only about 900 KB of available disk space – less than 1 MB – and a tiny 16 MB RAMdisk. He settled on Samba 4.8, which dates back to 2018, the same year Apple discontinued the product line, but which includes the necessary Time Machine support via a module named vfs_fruit. The TimeCapsuleSMB docs are worth a read.
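To give a flavor of what that module buys you, here is an illustrative smb.conf share definition of the kind a Samba 4.8 (or later) build uses to advertise a Time Machine target. This is not lifted from TimeCapsuleSMB itself – the share name, path, and user below are made up – but the fruit options are the standard upstream ones that make macOS treat the share as a backup destination:

    [TimeMachine]
        ; hypothetical path on the Time Capsule's internal disk
        path = /Volumes/dk0/timemachine
        valid users = backupuser
        read only = no
        ; vfs_fruit provides the Apple (AAPL) SMB2 extensions Time Machine expects
        vfs objects = catia fruit streams_xattr
        fruit:aapl = yes
        fruit:time machine = yes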
We were impressed by his descriptions of how he worked around the hardware's very significant limitations. Notably, on the early models, you'll need to manually reload the software every time you reboot the Time Capsule; the final model can do this automatically. Don't fret at the thought of backing up to such an elderly spinning hard disk: iFixit has guides on how to replace the drive in both the early models and the later ones. ®

London’s BT Tower to get rooftop swimming pool

Sat, 2026-05-09 10:03
Visitors to London’s iconic Telecom Tower might soon be able to go for a rooftop swim, according to plans revealed by the developer turning the building into a hotel. The 177 meter (581 ft) high structure in Fitzrovia in London’s West End was sold off by BT Group in 2024 to US-based hotel owner-operator MCR Hotels for £275 million ($346 million). At the time, the firm said it wanted to preserve the Grade II listed building while converting it into a hostelry.

Now, MCR has announced a small number of public consultation events, to be held on May 11, 12, and 16, where those interested can view the emerging proposals for the site, meet the project team, and share feedback on the plans. Those proposals include public access to the top of the tower and its podium buildings for the first time in almost half a century. The 34th floor was famously home to a revolving restaurant that gave diners a panoramic view of Britain’s capital as it slowly turned once every 22 minutes, but this was closed in 1980. Also part of the proposals are a new publicly accessible square, retail shops and restaurants at ground level, and a rooftop swimming pool.

London is already home to a number of high-rise swimming venues. There is the vertigo-inducing Sky Pool, which spans two apartment buildings ten stories up at the Embassy Gardens development in the Nine Elms area of Wandsworth. You will find an infinity pool at the Shangri-La hotel on the 52nd floor of the Shard building near London Bridge, and there is also a pool on the roof of the Berkeley Hotel, overlooking Knightsbridge.

The BT Tower was originally known as the Post Office Tower when it was built in 1964, and its main purpose was to support microwave antennas used to beam telecom signals between London and the rest of the country. The tower will not be turned into a vertical hotel immediately. BT said payment for the site is spread over six years to 2030, during which time the company will gradually remove all of its telecoms equipment from the building. As we reported previously, the BT Tower also famously fell victim to a giant kitten in an episode of the 1970s British TV comedy series The Goodies. ®
