The Register
Dissatisfied: Three-fourths of AI customer service rollouts are a letdown
If you're thinking you can replace your human call center staff with a server farm of bots, think again. Nearly three-quarters of enterprises that deploy AI customer communications agents later roll them back or shut them down, according to new research suggesting the systems are far harder to manage reliably in production than the AI hype implied.

Swedish comms-as-a-service firm Sinch surveyed more than 2,500 AI decision makers from various countries and industries for its AI Production Paradox study. The starkest finding is undoubtedly the 74 percent rollback or shutdown rate for deployed AI customer communications agents tied to governance failures, but that's not the only sign enterprise AI deployments are falling short of expectations.

AI rollback rates, which Sinch told us specifically refer to AI projects that were deployed and pulled from live service rather than projects that failed before launch, actually rise to 81 percent among organizations that it describes as having "fully mature guardrails." That, says Sinch Chief Product Officer Daniel Morris, suggests governance alone is not fixing the problem.

"The most advanced organizations aren't failing less; they're seeing failures sooner. Higher rollback rates reflect better monitoring and control, not weaker performance," Morris said in a press release. "If governance was the fix, the most mature teams would roll back less, not more. Our data points to a deeper issue."

According to the findings, 84 percent of AI engineering teams are spending at least half their time on safety infrastructure, leaving little time to develop AI. This is exacerbated by the fact that most firms said spending on AI trust, security, and compliance ranks ahead of AI development itself.

"When 75% put trust, security, and compliance in that top three — ahead of AI development itself at 63% — that's a finding about where the priority sits within their AI customer communications programs," a Sinch spokesperson told us in an email.
In other words, it seems like most organizations realize that their biggest issue with AI isn't getting it working properly - it's getting it to just work safely in the first place. "The operational cost of running AI safely at scale is much larger than most organizations expect," the Sinch representative explained.

The numbers don't change based on organizational size or budget, either, Sinch told us. "The rollback rate holds consistently across every region and every industry in the study, which suggests size isn't a meaningful protective factor," the company said. "Rollback isn't a symptom of under-investment or being too small to afford proper guardrails."

Of course, as a business communications service provider, Sinch linked its results back to AI customer service agents not being properly deployed on comms infrastructure designed for AI agents, a problem it's naturally positioned to offer a fix for.

Regardless, that three-quarter rollback figure doesn't seem too out of place when you consider recent customer service automation news. As we've reported on multiple occasions, replacing customer service staff with AI hasn't gone to plan for many businesses.

Gartner said in June 2025 that half of organizations expecting AI to significantly reduce customer service headcount would abandon those plans by 2027. Sinch's numbers suggest the problem may extend beyond staffing cuts to the AI agents themselves. Not that far-fetched when Gartner was already warning last year that fully agentless contact centers were not practical in the real world.

"Our vendor evaluations reveal that an agentless contact center is not yet technically feasible, nor is it operationally desirable," Brian Weber, VP analyst in the Gartner Customer Service & Support practice, told The Register, adding that unexpected costs and unintended results were contributing to abandonment plans - just like what Sinch is reporting now. ®
Categories: Linux fréttir
Utah mega datacenter could dump 23 atomic bombs' worth of energy per day
A proposed mega-scale datacenter in the US state of Utah has caused controversy after a physics professor estimated that the facility and its associated power generation could dump 23 atomic bombs' worth of energy per day. But the real question is whether it will actually ever get built.

The datacenter is part of the Stratos Project Area in Box Elder County, Utah, overseen by the Military Installation Development Authority (MIDA), a state agency straddling the military, local government, and private developers. Creation of the Stratos Project Area, covering about 40,000 acres of land, was given the go-ahead in a May 4 announcement from the Box Elder County Commission, after the commission delayed a vote amid residents' concerns.

At full buildout, the proposed Stratos campus could require up to 9 GW of power, making it one of the largest datacenter developments in the world. Meta's planned Hyperion cluster is aiming for 5 GW, for example, while the first facilities hitting 1 GW are only expected to come online this year. For comparison, 9 GW is roughly comparable to New York City's average electricity demand.

Utah State University physics professor Dr Rob Davies estimated that the proposed Stratos campus and its associated natural gas power plant could dump energy equivalent to 23 atomic bombs per day into the surrounding Hansel Valley. Davies' preliminary analysis said this could raise daytime temperatures by 2°F to 5°F (1°C to 3°C) and nighttime temperatures by 8°F to 12°F (4°C to 6°C), potentially causing serious ecological impacts in the high-desert valley.

Not surprisingly, many have questioned Davies' figures, especially as he hasn't published his math, with the topic debated on forums such as Reddit.
However, even skeptics such as Andy Masley, a writer and researcher who claims to have taught high school physics, find that the math broadly checks out, so long as the bomb you measure it by is the one dropped on Hiroshima, which, at about 15 kilotons, was much smaller than modern weapons. The key thing to bear in mind, however, is that an atomic bomb releases its energy all at once in the blink of an eye, whereas in the datacenter's case, the heat will be spread across 24 hours.

Still, the point Davies was making is that this will be extra energy being pumped into what is already a fragile desert environment, and the figures "strongly indicate the need for thorough and independent ecological assessment" of the impact of the Stratos Project.

A recent study by a team at the University of Cambridge also suggested that datacenters can create heat islands, raising surrounding temperatures by several degrees at distances of up to 10 km (over 6 miles). This was met with skepticism by Omdia Senior Research Director Vlad Galabov, who told The Register that "Simple physics suggests that even very large datacenters contribute only a small additional heat flux when spread over kilometres."

The Stratos Project is intended as a long-term scheme, with a multi-year buildout, meaning that it may not reach full capacity for a decade, if at all. Reports suggest that the finance industry is becoming increasingly concerned about the level of borrowing that is needed to continue this datacenter build boom. The Financial Times reported recently that banks are looking for new ways to offload risks, with JPMorgan Chase and Morgan Stanley trying to distribute datacenter-related deals across a broader range of investors.

CoStar Group also warned that construction costs for modern bit barn campuses have surged, thanks to massive upfront spending on land, power support systems and specialized construction, leading to large projects running into the billions of dollars.
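Masley's sanity check is easy to reproduce. The sketch below runs the arithmetic under our own assumptions, not Davies' published inputs: the campus draws its full 9 GW, the on-site gas plant runs at roughly 55 percent combined-cycle efficiency, and all of the fuel's energy eventually ends up as heat in the valley, measured against the roughly 15-kiloton Hiroshima bomb:

```python
# Back-of-envelope check of the "23 Hiroshimas a day" figure.
# Assumptions are ours, not Davies': full 9 GW load, ~55% plant
# efficiency, and all fuel energy ultimately released as heat.
IT_LOAD_W = 9e9                  # 9 GW electrical load at full buildout
PLANT_EFFICIENCY = 0.55          # assumed combined-cycle efficiency
SECONDS_PER_DAY = 86_400
HIROSHIMA_J = 15_000 * 4.184e9   # ~15 kt of TNT at 4.184 GJ per tonne

fuel_power_w = IT_LOAD_W / PLANT_EFFICIENCY      # total primary power
heat_per_day_j = fuel_power_w * SECONDS_PER_DAY  # daily heat release
bombs_per_day = heat_per_day_j / HIROSHIMA_J
print(f"{bombs_per_day:.1f} Hiroshimas per day")  # ~22.5
```

With those inputs the figure lands at roughly 22.5 Hiroshimas per day, close enough to 23 to suggest Davies is counting plant plus load, not the IT load alone.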
According to some estimates, building 1GW of AI datacenter capacity costs around $35 billion, with Nvidia's figures said to peg the costs at $50 to $60 billion. If correct, the developers of the Stratos facility will be looking at costs in excess of $300 billion.

Alan Howard, principal analyst for colocation and DC building at Omdia, puts the figure substantially lower, but still sees problems ahead.

"Thinking about a rough $8m per MW, that would put datacenter construction at ~$8 billion for just building construction including power and cooling. The power generation and IT equipment would be on top of that, so the number would be over $100 billion," he told The Register.

"What's important here is that the money comes from different sources: Stratos pays for site development; other companies will likely pay for building construction; even other companies will build and operate the onsite power generation; and even other companies will buy and operate the IT equipment."

"The tricky part is the tepid climate for funding these big projects. While there will be multiple companies providing funding for different pieces, the debt financing underwriting process will look at the broader project as part of their risk assessment," he stated.

Developers in the US and elsewhere are also facing increasing opposition to datacenter projects from local communities, with projects being delayed or entirely canceled in response. ®
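Howard's "$8 billion" appears to be a per-gigawatt figure, since $8 million per megawatt works out to $8 billion per GW. On that reading, which is our assumption rather than his, scaling to the full campus goes like this:

```python
# Howard's rough $8M per MW, read (our assumption) as a per-GW unit
# cost, scaled to the full 9 GW campus for shell construction alone.
COST_PER_MW = 8e6   # ~$8M/MW for building construction, power, cooling
CAMPUS_MW = 9_000   # 9 GW at full buildout

construction_usd = COST_PER_MW * CAMPUS_MW
print(f"${construction_usd / 1e9:.0f}B")  # $72B before power gen and IT
```

Around $72 billion for construction alone, which is consistent with his "over $100 billion" once on-site power generation and IT equipment are stacked on top.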
Mystery Microsoft bug leaker keeps the zero-days coming
The anonymous security researcher who has already maliciously exposed three Windows zero-days this year has revealed two more, dropping them just after Microsoft's monthly Patch Tuesday update.

Nightmare-Eclipse, or Chaotic Eclipse, depending on which of their aliases you prefer, released details about YellowKey and GreenPlasma - respectively a BitLocker bypass and a privilege escalation flaw, handing SYSTEM access to attackers. Experts speaking to The Register warned that both vulnerabilities present serious security concerns, especially since Nightmare-Eclipse released substantial technical information about exploiting them.

Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine.

When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification."

Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock.

Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available.
The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC). Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress.

Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue."

Four, five… and more?

YellowKey and GreenPlasma are the latest in a series of five Microsoft zero-day bugs the researcher has exposed this year. When Nightmare-Eclipse released BlueHammer (CVE-2026-32201, CVSS 6.5) - patched by Microsoft in April - the researcher was described as disgruntled, and has since been rumored to be a former Microsoft employee.

According to their maiden blog post under the Chaotic Eclipse alias, the bug leak began after an alleged violation of trust. "I never wanted to reopen a blog and a new GitHub account to drop code," they wrote. "But someone violated our agreement and left me homeless with nothing. They knew this will happen and they still stabbed me in the back anyways, this is their decision not mine."

In early April, the researcher leaked proof-of-concept code for Windows Defender exploits they called RedSun and UnDefend - another admin privilege escalation bug and denial-of-service flaw, respectively - as well as BlueHammer.
Both RedSun and UnDefend remain unfixed, and according to Huntress, the proof-of-concept code released was quickly picked up and abused in real-world attacks. Ferguson described the exposure of YellowKey and GreenPlasma as the latest in an escalating, retaliatory campaign against Microsoft, and warned of more coming. "Prior releases include BlueHammer and RedSun, both of which attracted serious community attention and real forks," he said. "The same post linking yesterday's releases warns of another Patch Tuesday surprise and hints at future RCE disclosures. They claim to have a dead man's switch with more ready to go. This researcher has followed through on every prior threat." ®
Rust stalks IBM mainframes, but only in nightly form
IBM's effort to bring in-kernel Rust to its mainframe platform has taken a step forward, although anyone hoping to use it on production iron will need to be comfortable with a nightly Rust compiler for now.

Engineer Jan Polensky has submitted a patch series titled "s390: enable Rust support and add required arch glue." If accepted, it will allow Rust code to be used in the Linux kernel on IBM mainframe hardware, which the kernel still refers to as s390 after the generation of IBM mainframe kit introduced in 1990.

He notes that the series currently depends on a nightly Rust toolchain. That does not sound to us like the sort of thing many conservative mainframe shops are likely to embrace with enthusiasm, but even big new features have to start somewhere. This is a significant step.

When Rust was introduced into the kernel in 2022, The Register mentioned a problem that we rarely see raised elsewhere: while the kernel is generally compiled with GCC, the standard Rust compiler, rustc, is based on LLVM instead. Wikipedia has a list of LLVM backends, and although there are a growing number, it's a shorter list than GCC's 48. There is an experimental GCC front-end for Rust but it's not ready for prime time yet. The Linux kernel itself has supported compilation using LLVM since kernel 6.9 over two years ago.

At the moment, the kernel development team is still working on version 7.1, which at the time of writing is still on release candidate 3 – so relatively early days. Last month, we reported on its new NTFS driver and the removal of some fairly ancient hardware support. The final version of Linux 7.1 will probably appear about halfway through 2026, meaning that kernel 7.2 is still quite far off. It might be in time to appear in Ubuntu 26.10 – but then again, we suspect that very few IBM mainframe customers use interim Ubuntu releases. ®
Royal Household seeks £3M finance system fit for a King
The UK's Royal Household plans to spend £3 million ($4 million) on a new finance system to replace one that is more than 15 years old, the Buckingham Palace-based organization said in a procurement notice published on May 7.

King Charles III gets to announce the British government's overall plans at the start of each parliamentary term but also has his own miniature civil service in the shape of the Royal Household, described by the procurement notice as a "public undertaking (commercial organization subject to public authority oversight)."

The household received £86.3 million ($116 million) in taxpayers' money in 2024-25, known as the Sovereign Grant, and generated a further £21.5 million ($29 million) from activities including tours of Buckingham Palace. The King also receives private income from the Duchy of Lancaster, an estate held in trust for the sovereign, while the Royal Collection Trust, a charity, cares for the royal art collection and runs visitor attractions including galleries at Buckingham Palace and Holyroodhouse in Edinburgh.

According to the procurement notice, the Royal Household contacted suppliers on various Crown Commercial Service (now Government Commercial Agency) frameworks for information last year. It now plans to award a contract for financial software, implementation, training, and support, initially for five years from September 30, 2026, with a possible two-year extension, covering the household and a number of affiliated organizations.

To manage this project, last year the household's Privy Purse and Treasurer's Office – its back office division that runs finance, human resources, technology and facilities management – advertised for a finance systems technical project manager, a two-year fixed-term contract paying £60,000 to £65,000 annually.
According to the job advert, the role requires "a proven track record of delivering successful ERP or finance system projects" and the ability to "tailor your style to suit technical and non-technical audiences alike." As well as a 15 percent pension contribution, perks of leading the installation of His Majesty's new financial software include free entry to royal locations and 20 percent off at Royal Collection Trust gift shops. ®
Microsoft aims to speed Windows with 'leap forward' in WinUI 3 perf
Microsoft claims to have achieved a "leap forward" in performance for WinUI 3, the current native framework for Windows apps, with a 25 percent improvement for the parts of File Explorer coded using this framework.

Software engineer lead Beth Pan posted figures for the WinUI portion of File Explorer, showing 41 percent fewer memory allocations and 45 percent fewer function calls. She added that some optimizations "involve small or large breaking changes," so they will be opt-in at first for developers using the framework. The plan is for the optimizations to become the default in future versions of WinUI and the Windows App SDK, with opt-out available when needed.

The new optimizations are part of a push to make Windows more responsive. In March, Windows boss Pavan Davuluri promised to improve the quality of the operating system, including a commitment to a "faster and more dependable File Explorer." His post noted that Microsoft intends to "move more experiences to WinUI 3" for faster responsiveness.

Pan's post is bittersweet for developers. Performance issues with WinUI 3 have been well known for years. Although Microsoft calls it a native framework, that is a stretch. WinUI 3 is based on WinRT (Windows Runtime), a component interface first used in Windows 8 that sits between application code and the underlying Win32 API, which has a better claim to being native.

An advantage of WinUI 3 is its support for Fluent UI, the Windows design system. Developers using WinUI 3 get the Windows 11 look and feel, but not the best performance. "WinUI 3 is currently measurably slower than both WPF [Windows Presentation Foundation] and UWP [Universal Windows Platform]… this is NOT OK," said one comment. Another said that "you can't build a WinUI app and call it smooth at the same time." Component vendor DevExpress has also posted about WinUI 3 performance issues.
The company stated that WinUI component architecture has the potential for fast rendering and animation, but that "unfortunately, each action within a component requires WinRT interop, which is slow." These concerns undermine Davuluri's hope that using more WinUI 3 will fix Windows 11 performance, unless the framework itself is improved, as Pan now claims.

Another longstanding gripe among Windows devs is that Microsoft's developer division has created frameworks that the Windows and Office teams have not always adopted consistently. Internal tensions go back many years. Some may still remember early builds of "Longhorn," the code name for Windows Vista, having to be reworked before Vista's eventual release in 2007 because of performance issues with .NET. This caused distrust of .NET in the Windows team.

"What you need to do is actually use your framework across the company," said another comment. Pan replied, insisting "that's the push." This is exactly what developers using WinUI 3 want to hear, but the long and tangled history of Windows UI frameworks suggests that a consistent and enduring company-wide approach is unlikely. ®
SpaceX sets date for Starship test that asks: Did we break anything in the upgrade?
SpaceX has named the earliest date for the next Starship launch – May 19. The company has already completed a Wet Dress Rehearsal (WDR) so the next step is to cross fingers and launch the stainless steel behemoth.

The launch window for the 12th flight test of Starship opens at 5:30 pm CT, and, as with previous test flights, the vehicle will be on a suborbital trajectory. The launch, from an entirely new pad, will be the first of SpaceX's third-generation Starship and will validate that there have been no inadvertent regressions.

Raptor 3 engines power the Starship and Super Heavy Booster. On the booster, SpaceX has reduced the number of grid fins used during recovery from four to three, increased their size by 50 percent, and added a new catch point, although there are no plans to catch the booster on the next flight – it is destined for the Gulf of Mexico. SpaceX wrote: "As this is the first flight test of a significantly redesigned vehicle, the booster will not attempt a return to the launch site for catch."

For Starship, changes include a redesign of the propulsion system, increased propellant tank size, and improvements to the reaction control system. The Starlink dispenser mechanism has also been updated to increase satellite deployment speed – 22 mass simulators will be carried on this mission.

Ultimately, this is a considerably enhanced rocket, with more powerful engines, and a new launchpad and tower. Hence the need to demonstrate that nothing has been broken along the way.

The objectives are therefore familiar. SpaceX will call the flight of the booster a success if there's a successful launch, ascent, stage separation, boostback burn, and finally a landing burn in the Gulf of Mexico. Starship's objectives are to deploy Starlink simulators, which will also be on a suborbital trajectory to burn up harmlessly, restart a single Raptor engine, and survive a controlled re-entry, although the vehicle will not be recovered for reuse this time around.
Two of the Starlink simulators will be able to scan and capture imagery of the vehicle's heat shield, allowing engineers to assess its readiness for return to the launch site on future missions. In addition to painting some heat shield tiles white to simulate missing tiles for the imaging test, a single tile has been removed to see what happens to the surrounding tiles. Finally, the vehicle will attempt a dynamic banking maneuver to mimic the trajectory of a future return-to-Starbase mission. ®
Greater Manchester still says no to NHS data platform with Palantir at its heart
One of the UK's biggest health regions has doubled down on its decision not to join the NHS Federated Data Platform (FDP), owing to concerns over its lead supplier, Palantir, and a lack of evidence for the technology's benefits.

Greater Manchester Integrated Care Board (ICB), which manages health services for 2.8 million people, deferred a decision on whether to sign up to the FDP last year. It is the only ICB in England to do so.

A board meeting in May 2025 heard that NHS England had not addressed the ICB's concerns around risks. The ICB added that Greater Manchester's capability in data analytics was greater than what the FDP currently offered.

In a November meeting, Greater Manchester ICB said it would review its position. However, a recent Freedom of Information response said that review was now off the table, and the ICB would stick with its decision not to join the FDP.

"It was proposed that a paper would be produced in due course to guide a review of the ICB position," the response said. "This paper has not been produced yet and work has not started on this paper because it's clear that the public concerns have heightened rather than diminished since the deferral decision has been taken and there does not appear to be any compelling evidence that the value proposition for NHS GM from FDP has materially changed in favour of adoption."

NHS England has been offered the opportunity to comment.

The FDP was created by Palantir under a much-criticized £330 million procurement for a seven-year contract awarded in November 2023. NHS England signed the deal after it awarded £60 million to the vendor without competition during the pandemic. The FDP is designed to improve information flow through various NHS organizations and reduce the backlog in non-urgent "elective care," which skyrocketed during the COVID-19 outbreak.
NHS England confirmed that Palantir staff could access patient data following a change in policy, provoking outrage from those concerned about the US spy-tech firm's position at the heart of NHS data after a series of outspoken political positions from its leadership. Last month, the junior minister responsible for the FDP said the government would consider using a break clause in the FDP contract to remove Palantir, although he defended the system's performance. Liberal Democrat MP Martin Wrigley claimed the NHS was locked into the Palantir contract and owned none of the software or intellectual property resulting from it. ®
Microsoft gives Windows Update a Ctrl-Z for bad drivers
Microsoft is getting to grips with Windows drivers that leave the operating system in an unstable state with a proactive rollback dubbed "Cloud-Initiated Driver Recovery."

"When a driver is identified as having quality issues during our shiproom evaluation process, Microsoft can now initiate a recovery action from the cloud, replacing the problematic driver on affected devices without requiring manual intervention from the user or the hardware partner," the company explained.

The process applies to drivers distributed through Windows Update. Microsoft's partners can use the Windows Update service to distribute new code, but sometimes things go awry, and no amount of finger-pointing from Redmond will dispel the general feeling of instability and unreliability that surrounds the company's flagship operating system.

A cynic might wish that bugs were ironed out before faulty drivers are inflicted on users via Windows Update, or wonder why this functionality was not already present, but this is a step in the right direction.

Previously, if a vendor found a problem with a driver, remediation involved either a swift driver update or having the user perform manual steps to remove the faulty code. In some cases, users could be stuck with a defective driver for an extended period.

The change means Microsoft can now proactively kick off recovery action, rolling back the code to the last known good version via Windows Update. The hardware partner doesn't have to do anything to get the balky code off user computers – it's all handled by Microsoft. We would, however, expect some words with the vendor and the code fixed before long.

For Windows device users, the change is expected to improve quality and reliability. Driver partners, meanwhile, can leave Microsoft to handle rollbacks when defects are detected. The change should also be transparent to users, although partners will need to be aware of what is happening.
Microsoft wrote: "We encourage partners to continue monitoring their driver quality metrics in the Hardware Dev Center dashboard and to respond promptly to any shiproom feedback on rejected submissions." Rollout will happen over the coming months. ®
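To illustrate the flow Microsoft describes – shiproom flags a bad driver, the cloud initiates recovery, and the device reverts to the last known good version – here is a purely hypothetical Python sketch; the class and method names are our invention and bear no relation to the actual Windows Update implementation:

```python
# Hypothetical model of cloud-initiated driver rollback. All names
# here are illustrative only, not Microsoft's real implementation.
from dataclasses import dataclass, field

@dataclass
class DriverHistory:
    installed: str
    known_good: list[str] = field(default_factory=list)

    def install(self, version: str) -> None:
        # Keep the previous version as a rollback candidate.
        self.known_good.append(self.installed)
        self.installed = version

    def cloud_recovery(self) -> str:
        # Shiproom flags the current driver: revert to last known good,
        # with no action needed from the user or the hardware partner.
        self.installed = self.known_good.pop()
        return self.installed

dev = DriverHistory(installed="10.0.1")
dev.install("10.0.2")          # partner ships a faulty update
print(dev.cloud_recovery())    # rolls back to 10.0.1
```

The point of the sketch is simply that the rollback target must already exist on the "last known good" list, which is why the mechanism only helps for drivers that reached the device through Windows Update in the first place.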
London cops hail fixed facial recognition cams after suspects collared every 35 mins
London's Metropolitan Police Service (MPS) is giving its six-month trial of static live facial recognition (LFR) cameras credit for helping it secure an arrest every 35 minutes.

The Met's LFR chief said the results show why LFR is "such a powerful tool" for coppers, who, across 24 operations between October 2025 and March 2026, made 173 arrests. Those arrested included people suspected of kidnapping and sex crimes, as well as others who had evaded law enforcement for decades.

Among those 173 arrests was that of a 36-year-old woman who had been wanted by the police after failing to appear at court for an assault in 2004. The Met also arrested a 31-year-old man, wanted for more than six months in connection with voyeurism, and a 41-year-old man suspected of rape in November 2025.

Thirty-seven of the total 173 arrests related to those who had breached their court-imposed conditions, the Met said. Nilton Darame, 25, was one of these individuals who had violated his electronic tag conditions and was found by a static camera alert in October last year, say cops. He was arrested on suspicion of intentional strangulation and two counts of assault on an emergency worker. In January, he was sentenced to 18 months in prison.

In November last year, the Met continues, Kastriot Krrashi, 35, was clocked by a Croydon LFR camera and stopped by officers on suspicion of breaching his conditions as a registered sex offender. He was later sentenced to six months in prison.

Officers were also alerted in January when LFR cameras identified Neville Cohen, 55, who was wanted after failing to attend Croydon Police Station in October 2025, as required by his Sexual Harm Prevention Order (SHPO). The MPS said that after he attempted to flee from officers on the day, he was arrested and later sentenced to four months in prison.
Lindsey Chiswick, national and Met lead for LFR, said: "These results show why live facial recognition is such a powerful tool when it's used carefully, openly, and in the right places. Crime in this area is down by more than 10 percent, and the public can see the difference.

"This technology is helping us find people wanted by the courts, identify serious offenders quickly, and focus our resources where they make the biggest impact, all with exceptional accuracy.

"We will continue using static cameras in Croydon as part of our regular live facial recognition deployments, which play a vital part in keeping London safe."

The tech that helped secure these arrests was deployed in Croydon as part of the Met's first trial of fixed LFR cameras. Two cameras were deployed, one at each end of the town's High Street.

UK police typically use mobile forms of LFR, such as cameras that sit inside highly visible vans, marked with the usual fluorescent, reflective police decals seen on patrol cars. These are usually situated in high-footfall areas across major towns and cities, and are only activated when the public is given prior warning and during limited hours.

Croydon's fixed cameras are instead permanently installed onto existing infrastructure, such as lampposts, but are not running 24/7. Like the LFR vans, they are only activated during defined operational timeframes and when there are police officers in the vicinity. The MPS says that since the fixed cameras are monitored remotely, it frees up LFR vans to be deployed in other parts of the capital.

It wouldn't be an LFR announcement without a comment on accuracy. The Met said that more than 470,000 individuals walked past the two Croydon cameras during the trial's operational periods, and only one false positive was registered. That individual was not arrested, and no one has ever been arrested as a result of a false positive LFR flag, the Met added.
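Taking the Met's headline numbers at face value, the arrest rate implies a figure the force does not state directly – roughly how many hours the cameras were actually operational across the 24 deployments:

```python
# One arrest every 35 minutes, 173 arrests, 24 operations:
# back out the implied total camera operating time.
arrests = 173
minutes_per_arrest = 35
operations = 24

total_hours = arrests * minutes_per_arrest / 60
hours_per_operation = total_hours / operations
print(round(total_hours), round(hours_per_operation, 1))  # 101 4.2
```

Around a hundred hours in total, or a little over four hours per operation, which is consistent with the Met's description of cameras activated only during defined operational timeframes rather than running 24/7.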
Ever-controversial tech

Despite the accuracy figures cited by the MPS, civil liberties groups consistently campaign against the use of LFR. One of the UK's loudest anti-LFR campaigners, Big Brother Watch, regularly labels the technology "dystopian" and called the permanent Croydon installations "chilling infrastructure." Big Brother Watch recently lost a High Court battle in which it represented anti-knife crime campaigner Shaun Thompson, who received a settlement from the Met after LFR technology wrongly flagged him and officers stopped and questioned him. Thompson claimed he handed officers his passport and bank cards, but they remained unconvinced that the LFR detection was false. He said the technology was tantamount to "stop and search on steroids," referencing the controversial policing tactic. The group tried to curtail the Met's LFR use by arguing it violated several human rights, but the High Court ultimately ruled in the police's favor. ®
Categories: Linux fréttir
Linux gains more critical Windows apps: 3D Movie Maker and Space Cadet Pinball
Thanks in part to a Register reader and skilled programmer-archeologist, we bring you news of yet more vital enterprise Windows tools that have been brought to Linux. All right, we know, nobody was holding back their Linux deployment until Microsoft 3D Movie Maker was made available, but it's here now. It's not the only familiar mid-1990s Windows app with a fresh new version either. Around the same age is the Space Cadet Pinball game, first released as part of the Microsoft Plus Pack for Windows 95, 31 years ago. The Reg FOSS desk found itself somewhat moved by Lily Siwik's recent blog post about "Childhood Computing," in which she waxes nostalgic. The opening probably resonates with many Register readers: Fair enough. We can relate. This vulture was about 13 when he got his first one. It was the opening of the next paragraph that gave us pause: Look, we are all too well aware, it just stings a little, OK? She continues a little later: In our humble opinion, starting out on Windows XP really is not a gentle, welcoming introduction to the wonders of computers – it's what drove us to run Linux on the desktop full time, as well as spending considerable time and money upgrading an elderly PowerMac we'd been given so that it could run the new Apple Mac OS X, thanks to Ryan Rempel's wonderful XPostFacto. But yes, if Windows XP was what your first-ever computer ran, then we can see why you might be nostalgic for it. A few years ago, we wrote "Want to live dangerously? Try running Windows XP in 2023," and quite a few people seemed to enjoy it. You can install 64-bit Windows XP on a 21st century dual-core 64-bit laptop and get all its hardware working. It takes a while to do it, but it goes like a rocket if you do. If you take it for a spin on the internet, it will probably get infected with some vintage malware just as fast. You are not going to get that far with Windows 95, though. 
If you have more than 480 MB of RAM, it won't even start, and the great Raymond Chen explained why in 2003.

Space Cadet Pinball

A few years ago, The Register reported on how Chen explained the untimely demise of 3D Pinball for Windows. This was a follow-up to Chen's 2012 post "Why was Pinball removed from Windows Vista?" But never fear: the good news is it's back. Last week, Oracle Linux developer Stephen Brennan blogged about how to get Space Cadet Pinball on Linux. Space Cadet was just one table from a collection in the game Full Tilt! Pinball, and the Windows original of that can be found on GitHub. Apparently, it's possible to get it running in some way on Windows 10, although we confess we have not tried. The Space Cadet table was included in several versions of Windows, and so that's the one many people remember best. The code for that has been decompiled, rebuilt, and ported to some 14 different platforms, and thanks to Muzychenko Andrey, you can find it on GitHub too: SpaceCadetPinball. Although it's not in the list there, one of these platforms is Linux, and as a result, Muzychenko's version is on Flathub.

3D Movie Maker

We suspect that Full Tilt! Pinball was never officially open sourced, but as it is 31 years old, current Maxis owner Electronic Arts doesn't really care. Microsoft 3D Movie Maker, though, is a different matter. As The Reg reported in 2022, Microsoft released it as open source, and the source code is still right there. Microsoft may not have modified it in all that time, but that doesn't mean it's untouched. Reg reader Mark Cave-Ayland wrote to let us know: What's more, last month, Mark wrote up their efforts and described much of what he and Ben had to do in order to get it working, over a lavishly detailed two-part blog post: "Porting 3D Movie Maker to Linux - part 1" in early April and 11 days later, "Porting 3D Movie Maker to Linux - part 2." 
It's not only a native port: they also had to do more work to make it 64-bit clean, add native file load and save dialog boxes, MIDI background music via FluidSynth, and a video player powered by GStreamer – among other things. Currently, they're looking at making a Raspberry Pi version as well. ®
dBase debased: Database titan fades to black after 47 years
It looks like a popular blog post about the decline and fall of dBase has knocked the long-moribund database's website offline. Sic transit gloria mundi? We were rather entertained by a recent blog post on "Delphi Nightmares" mourning the passing of the online store for the dBase website: dBase: 1979-2026. When the post went up, the online shop at store.dbase.com was still online, but since the post was shared on Hacker News yesterday, even that has gone. One could say that after 47 years, dBase has finally been debased. It's an interesting telling of the decline and fall of what was once an industry titan, and for us, the disappearance of the site itself once the blog post went up is just the cherry on top. Indirectly, what turned into dBase started out as a tool called JPLDIS, written for the Jet Propulsion Laboratory's three Univac 1108 computers. A FORTRAN rewrite of the simpler Tymshare RETRIEVE [PDF] tool, it was started by Jack Hatfield and finished by Jeb Long. C. Wayne Ratliff then rewrote it in Intel 8080 assembly language for PTDOS on his IMSAI 8080, and tried to sell it under the name Vulcan: he put an advert in BYTE Magazine, offering it for $50. It wasn't a hit, as he recounted in an interview with Susan Lammers. Serial entrepreneur George Tate hired him and licensed Vulcan. Tate set up a new company called Ashton-Tate – there was no Ashton, but he later bought a parrot, named it Ashton and made it the mascot. Ashton-Tate renamed the database to dBASE II – to sound more mature – raised the price dramatically, and sold the CP/M version as shrink-wrap software. The late John Walker noted in 1982 that it was "selling like hotcakes at $800 a pop." That same year, a PC version of dBase II became one of the early commercial business applications for IBM's new PC. Former dBase Developer's Bulletin editor Jean-Pierre Martel's personal history of dBASE recounts how it remained one of the industry-standard apps throughout the 1980s. 
In 1984, the enhanced dBase III did even better, followed in 1986 by dBase III+, with a menu-driven UI as well as the infamous "dot prompt" command line. In 1988, dBase IV followed, but didn't include the promised compiler for the dBase programming language. This opened up opportunities for rivals. Nantucket's Clipper, which could compile dBase code into applications, was already out there; because it didn't include the interactive language, it didn't have the same primary UI, which protected it from being sued. Clipper ended up acquired by Computer Associates. Fox Software's FoxBase, later FoxPro, was another, and even Ratliff himself was impressed. Microsoft eventually acquired FoxPro. There were many others, and that was the real problem for Ashton-Tate and the dBase product: its programming language became standardized and, because of trademark issues, became known as xBase. Even before the era of "open source," there was a DOS shareware app called WAMPUM, which is still out there. There are a number of FOSS implementations, including Harbour and its fork xHarbour. The Harbour GitHub repo has seen some activity this year, and the xHarbour one some too. Once your expensive proprietary app's file format and programming language escape into the wild and become partially standardized, that can make it hard to keep making money from it. It looks like that finally spelled the end for dBase LLC… but in the meantime, the xBase language is alive and reasonably well, considering its advanced age for a bit of software. ®
This browser add-in doesn't just hide ads, it tells you to OBEY
A fork of uBlock Origin Lite doesn't just remove the ads from web pages; it replaces them with tiles containing slogans from John Carpenter's 1988 film They Live. Published by Australian Dave Lawrence, the Chromium add-in (so it'll work in browsers such as Chrome and Edge) takes the uBlock Origin Lite content blocker (also known as uBO Lite) and tweaks it so that rather than simply hiding the ads, the ads are replaced with white boxes containing slogans from the movie. Lawrence listed them: "OBEY, CONSUME, WATCH TV, SLEEP, SUBMIT, CONFORM, STAY ASLEEP, BUY, WORK, NO INDEPENDENT THOUGHT, DO NOT QUESTION AUTHORITY." But sadly, nothing along the lines of "THIS AD IS HERE SO YOU DON'T HAVE TO PAY TO KEEP THIS SITE RUNNING." "Each blocked ad gets a single phrase, picked at random from the list," Lawrence explained in the project's repository. The uBlock Origin project is not involved, and Lawrence noted that only ads blocked by cosmetic filters get the They Live treatment. Custom user-defined cosmetic filters still hide ads normally. They Live is a science-fiction horror film in which the protagonist dons a pair of glasses that allow him to see the world as it truly is: run by ghoulish aliens using subliminal messaging to keep the population under control. Lawrence used Claude Code to add the They Live mode to the ad blocker, which might worry some, given concerns in some parts of the open source world about projects drowning in a tsunami of AI slop. However, Lawrence is upfront about the use of AI coding tools, and the add-in is certainly amusing. Ad blocking is a controversial subject. Google's changes to its browser extension architecture (dubbed Manifest v3) were expected to make content-blocking and privacy extensions less effective, but the reality turned out differently. The proprietary browser extension Pie Adblock also came under fire last year for allegedly lifting code and text from uBlock Origin, in violation of the latter's GPLv3 license. 
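Mechanically, the tweak is simple: where uBO Lite's cosmetic filtering would hide a matched element, the fork fills it with a randomly chosen phrase instead. A minimal TypeScript sketch of the idea (the function names and element shape here are ours for illustration, not the extension's actual code; the slogan list is the one Lawrence published):

```typescript
// Illustrative sketch of the They Live replacement idea.
const SLOGANS = [
  "OBEY", "CONSUME", "WATCH TV", "SLEEP", "SUBMIT", "CONFORM",
  "STAY ASLEEP", "BUY", "WORK", "NO INDEPENDENT THOUGHT",
  "DO NOT QUESTION AUTHORITY",
];

// Pick a single phrase at random from the list, as the repo describes.
function randomSlogan(rng: () => number = Math.random): string {
  return SLOGANS[Math.floor(rng() * SLOGANS.length)];
}

// Instead of hiding an element matched by a cosmetic filter,
// turn it into a visible tile bearing a slogan.
function theyLiveify(
  el: { textContent: string; hidden: boolean },
  rng: () => number = Math.random,
): void {
  el.hidden = false; // don't hide it, as a stock blocker would
  el.textContent = randomSlogan(rng);
}
```

In the real extension this runs only against elements matched by uBO Lite's cosmetic filter lists; as noted above, user-defined cosmetic filters bypass the treatment and hide ads as normal.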
The license for Lawrence's fork is also GPLv3, the same as upstream uBlock Origin/uBO Lite. ®
SAP U-turn brings AI features to ECC and on-prem S/4HANA
SAP has executed an apparent U-turn to bring AI to its legacy and on-prem ERP systems, as predicted in The Register earlier this year. The German software giant had been steadfast that it would not introduce "innovation" such as AI for on-prem systems, including its legacy ERP platform ECC, a move that outraged some members of the user community. In July 2023, CEO Christian Klein told investors SAP's "newest innovations and capabilities" would only be delivered in the public or private cloud using RISE with SAP, the vendor's lift-shift-and-transform program launched with partners and cloud providers in early 2021. "This is how we will deliver these innovations with speed, agility, quality and efficiency. Our new innovations will not be available for on-premise or hosted on-premise ERP customers on hyperscalers," he said at the time. However, during SAP's Sapphire conference in Orlando this week, Klein said there was "no confusion at all." New tech, such as the AI agents built on the SAP Joule platform, would be available to customers on-prem, so long as they had signed up for a cloud "journey," he said. "The majority of our Joule assistants and agents [will be available] also on-prem, on ECC, and S/4HANA for customers that have already committed the majority of the landscape to the journey, as an interim solution so that they can benefit from AI while they are modernizing. That's absolutely the right thing to do, and I'm excited to see customers taking advantage of this." In March, Alisdair Bach, head of SAP practice at consultancy Dragon ERP, predicted that SAP AI agents based on Joule would become available for on-prem systems after The Register revealed SAP's plan for migrating customers to the cloud was €2 billion off target in terms of declining on-prem support revenue. 
Muhammad Alam, executive board member for product and engineering, told the conference SAP would bring "a significant percentage of Joule assistants and agents to work in hybrid landscapes with the ability to connect to your S/4 on-premises and ECC landscape." He said the offer would be available to "customers that have started their modernization journey on RISE with SAP." The new capability will be available in a bundle of services under the new Business AI Platform banner, he said. "We've done this so you can start generating value from AI today on the SAP Business AI Platform while you're modernizing your estate." It will only be available to customers signing up for the Max Success Plan, a commercial deal. The company announcement said the plan would enable customers to fast-track assistant and agent activation, and allow "customers to adopt AI, including cloud and eligible on-premises systems, at their own pace as they move to the cloud." General availability is planned for May 2026. ®
ZTE advances intelligent network monetization strategy at AGC2026, empowering ISPs for sustainable growth
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, participated in the ABRINT Global Congress 2026, presenting a comprehensive and forward-looking portfolio of solutions tailored to internet service providers (ISPs), telecom operators, and enterprise customers in Brazil. The event, held from May 6 to 8 in São Paulo, served as a strategic platform for ZTE to demonstrate its vision for intelligent broadband monetization and digital infrastructure evolution. Aligned with the ongoing digital transformation of Brazil, ZTE is committed to enabling customers to enhance revenue generation capabilities, accelerate network modernization, and strengthen the foundation for sustainable growth. According to Leo Lu, Vice President of ZTE and President of ZTE Brazil, Brazil continues to represent one of the most dynamic and strategically important fixed broadband markets globally. "Brazil remains a highly dynamic broadband market with strong growth potential. ZTE has established a solid and long-term presence in the country, working closely with operators and over one thousand ISPs to promote large-scale FTTx deployment and industry advancement," he stated. A central pillar of ZTE's strategy is to support ISPs in transitioning from traditional connectivity providers to integrated digital service providers. This transformation is driven by expanding FTTx capabilities into service-oriented and experience-centric offerings. In this context, ZTE highlights its FTTR-B (Fiber to the Room for Business) solution, designed to address enterprise scenarios such as SMEs, commercial environments, and industrial parks. In parallel, ZTE provided intelligent experience management and precision marketing solutions based on user profiling and behavioral analytics, enabling operators to enhance customer engagement, improve retention, and increase ARPU. 
ZTE continues to advance core broadband technologies, including 10G PON, FTTH, and fully optical networks, establishing a robust, high-performance, and future-ready network foundation. "ZTE remains focused on driving the evolution of fixed broadband networks by promoting high-speed, stable, and sustainable infrastructure capabilities," added Leo Lu.

Enhancing Network Efficiency and Infrastructure Modernization

In the transport domain, ZTE introduced its Light OTN solution, designed to address the needs of cost-sensitive operators while maintaining high efficiency and scalability. By integrating optical and service layers into a compact architecture, the solution enables simplified deployment, plug-and-play operation, and flexible capacity expansion aligned with traffic growth. Within the IP network domain, ZTE presented a converged architecture aimed at streamlining multi-layer network operations, reducing O&M complexity, and accelerating service rollout. The solution supports automated service migration and is designed for long-term evolution, including readiness for 800GE.

Expanding Capabilities: Wi-Fi 7, Computing, and Energy Integration

ZTE also showcased its latest Wi-Fi 7 solutions, addressing both residential and enterprise application scenarios, alongside advanced server infrastructure and edge data center solutions. Leveraging local manufacturing and supply chain capabilities in Brazil, ZTE is able to optimize delivery efficiency, reduce operational costs, and strengthen local ecosystem support. These capabilities enable ISPs to expand into higher-value services, including cloud computing, localized content delivery, video services, and AI-driven applications. "ZTE is accelerating its strategic transition from traditional connectivity toward 'connectivity + computing', empowering customers to evolve into comprehensive digital service providers," emphasized Leo Lu. 
In addition, ZTE presented integrated energy solutions, including power systems and energy storage for telecom sites, data centers, and edge nodes. These solutions combine photovoltaic generation, energy storage, and intelligent energy management, contributing to improved energy efficiency, cost optimization, and operational resilience.

Strengthening Industry Collaboration and Future Vision

During the event, ZTE hosted an industry engagement session to exchange insights with ecosystem partners, focusing on key themes such as ISP business transformation, revenue growth through FTTR-B, cloud and computing integration, Wi-Fi 7 deployment, and cost optimization through lightweight network architectures and energy-efficient solutions. Looking ahead, ZTE reaffirmed its long-term commitment to the Brazilian market and its role in supporting the development of next-generation digital infrastructure. "ZTE will continue to deepen its presence in Brazil, strengthen collaboration with industry partners, and jointly build fully optical, intelligent, and sustainable networks to support the development of the digital economy in the AI era," concluded Leo Lu. Contributed by ZTE.
Civil servants to protest outside Capita AGM over pension shambles
Capita's annual general meeting next week is set to come with an unexpected item on the agenda: angry civil servants protesting over missing pensions, broken systems, and a data breach affecting pension scheme members. Members of the Public and Commercial Services (PCS) union will gather outside Capita's AGM at Sheldon Square, London, from 9:45am on May 18 to demand that the government strips the outsourcer of responsibility for administering civil service pensions after months of delays, botched portal launches, missing payments, bereavement failures, and a data breach that exposed members' personal information. PCS said Capita's handling of the scheme has left "thousands of retired civil servants without their pensions," while bereaved spouses face long waits for payments and future retirees are left worrying whether their income will materialize at all. The protest is the latest twist in what has become one of Whitehall's messiest outsourcing debacles. Capita took over administration of the Civil Service Pension Scheme in December under a £239 million contract covering around 1.5 million current and former civil servants – and things went sideways almost immediately. Users of the new pension portal were quick to complain about login failures, broken links, and unfinished-looking pages after the launch. MPs later heard the system went live without full functionality in place and struggled to handle the volume and complexity of cases transferred from the previous administrator, MyCSP. PCS said delays affected around 8,500 newly retired civil servants, while Capita said it inherited an 86,000-case backlog from MyCSP, many already overdue. The problems did not stop at missing pensions. In April, Capita confirmed that a flaw in the system briefly exposed pension data for other members for about 35 minutes, affecting 138 people. 
The breach prompted scrutiny from the Information Commissioner's Office and further fury from unions already accusing the company of turning the pension scheme into a slow-motion catastrophe. PCS general secretary Fran Heathcote previously described the situation as a "fiasco" and argued each fresh failure strengthened the case for bringing critical public services back in-house rather than handing them to contractors. Capita, meanwhile, continues to insist that inherited backlogs and unexpected case complexity contributed to the mess, while government officials acknowledged that performance fell well below expectations after go-live. Capita refused to comment. ®
ZTE hosts 2026 Broadband User Congress in São Paulo, under the Theme "Monetize Your Intelligent Broadband"
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, successfully hosted the fifth edition of its Broadband User Congress, themed "Monetize Your Intelligent Broadband", in São Paulo, Brazil. From Colombia at the foot of the Andes, to Mexico as a North American hub, and to Brazil as Latin America's largest market, ZTE continues to explore new pathways in broadband business across the region. The event brought together more than 300 senior executives from leading ISPs, operators, local government, and industry associations, as well as industry experts and ecosystem partners across Brazil and Latin America. Driven by the dual engines of broadband network cost reduction and efficiency improvement as well as home service innovation, the congress presented a comprehensive suite of intelligent broadband monetization solutions tailored for segmented scenarios in Latin America. These offerings help operators and ISPs break away from traditional pipe-only business models, boost basic network ARPU, diversify revenue sources and reshape core industry competitiveness. Fang Hui, Senior Vice President of ZTE, delivered a keynote address at the conference and stated: "With over 20 years of presence in Latin America, ZTE has delivered hundreds of landmark projects and served over 100 million users in Brazil. Moving forward, we will drive premium user experiences through technological innovation, unlock growth potential through win-win partnerships, and empower industrial transformation with AI. Leveraging our full-stack 'Connectivity + Computing' capabilities, we are committed to building an open, intelligent digital ecosystem and co-creating new value with Latin American partners." 
At the event, targeting operator and ISP markets, ZTE showcased innovative network operation concepts and upgraded product and solution portfolios, translating the strategic vision of "Monetize Your Intelligent Broadband" into deployable and profitable commercial practices. For operator broadband network construction, ZTE is committed to building the best-in-class broadband network for every customer. In the access network field, ZTE's FTTx monetization solutions lower network deployment barriers via lightweight OLTs, adopt CEM+AI to enable precise quality analysis and targeted marketing, and expand into the B2B blue ocean market with innovative products including AI all-optical campus and AI Interactive Flat Panel, balancing cost reduction, efficiency improvement and ARPU growth, continuously unlocking incremental network value. Together with the end-to-end intelligent ODN system, they provide strong assurance for the full lifecycle of the network. In the transport network field, ZTE launched C+L full-band 1.6T OTN solution enhanced with AI. It delivers breakthroughs in single-wavelength rate, spectrum efficiency and intelligent O&M, addressing challenges of scaling, cost reduction and agile service delivery. Meanwhile, ZTE launched single-slot 28.8 Tbps core router together with high-performance 100GE/400GE aggregation routers. With AI traffic optimization, AI security protection and AI dynamic energy saving, these products enable operators to build next-generation IP networks that are ultra-broadband, green, secure and intelligent, laying a solid foundation for broadband value upgrade. In smart O&M, the AIOps platform significantly improves fault diagnosis efficiency. It reduces OPEX and drives evolution toward L4 autonomous networks. For home broadband, ZTE leverages strong technology and continuous AI innovation to ensure optimal TCO. Quality assurance, intelligent control, and long-term evolution are the foundation of this approach. 
In smart connectivity, AI Wi-Fi 7 serves as the core. It leverages advantages in specifications, coverage, control, hardware and software as well as supply chain, helping operators optimize TCO and achieve precise selection. Customized package strategies enable operators to move beyond price-driven competition and build differentiated competitiveness. In smart home value-added services, large-model capabilities empower AI O&M, AI cameras and AI Smart View. These create new home experiences that integrate smart control, fitness, entertainment and security, driving the shift from basic connectivity to high-value services. In smart operations, the SCP platform enables unified management of all home devices and supports remote diagnostics, one-click optimization, stolen-device locking and targeted VAS marketing. It reduces O&M costs, stabilizes revenue and helps operators build efficient systems for sustainable monetization. ZTE empowers ISPs to increase revenue by enabling lightweight deployment and converged efficiency. Light PON enables fast and cost-effective network deployment, shortening time-to-market and helping ISPs seize early opportunities. Light OTN features a 12.8T-in-2U high-density design with minimalist WebGUI management, supporting single-wavelength 1.6T transmission, zero-touch deployment and plug-and-play activation, reducing deployment costs and O&M complexity while ensuring optimal TCO. Light IP Network provides end-to-end lightweight IP convergence from CPE to access, aggregation and backbone. Built on unified open architecture, smooth product evolution and AI-powered minimalist O&M, it enables heterogeneous network integration and safeguards sustainable ISP operations. Beyond the above scenario-based solutions, the event showcased three major highlight partnerships. ZTE launched the new-generation TV 3.0 set-top box, marking a new phase in Brazil's digital TV upgrade. 
ZTE and MediaTek jointly launched Wi-Fi 7 and 10G PON solutions tailored for premium home and small-and-medium business scenarios, enabling operators to tap high-value user groups. Furthermore, Qualcomm and ZTE are working together to shape the next generation of networking infrastructure for the AI Era. This collaboration brings together Qualcomm's AI‑native Wi-Fi and FWA platforms and ZTE's leadership in access and networking solutions. Lu Maoliang, President of ZTE Brazil, commented that Brazil serves as a core strategic market in ZTE's global layout. With 25 years of localized operation in Latin America, ZTE has provided services for over 100 operators and ISPs, has deployed more than 60,000 kilometers of optical fiber, and has reached over 30 million household users across the region. He noted that the congress precisely addresses local clients' demands, aiming not only to deliver leading technologies and products, but also to focus on driving sustainable commercial success for customers. Looking ahead, guided by the vision of "Monetize Your Intelligent Broadband", ZTE will further deepen its footprint in Brazil and the broader Latin American market. Leveraging its full-stack "Connectivity + Computing" technological capabilities, ZTE will partner with local operators and ecosystem partners to drive the evolution of intelligent broadband from network coverage expansion to value-based operation. The company will facilitate high-quality, sustainable growth of the regional communications industry and jointly build a new blueprint for the development of Brazil's digital economy. Contributed by ZTE.
ZTE and MediaTek unveil Tri-band Wi-Fi 7, targeting a relatively unexplored premium niche in Brazil
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, and MediaTek, a global semiconductor company and leader in the smartphone processor market, unveiled a joint strategy at the 2026 ZTE Broadband User Congress. The two parties will expand premium connectivity product portfolios tailored for high-demand residential users and small businesses in Brazil, addressing their needs for advanced technology, comprehensive coverage, ultra-high speed and low-latency network performance. During the meeting, the companies presented the benefits of tri-band Wi-Fi 7, which operates simultaneously on 2.4 GHz, 5 GHz, and 6 GHz. The adoption of the 6 GHz band, in addition to the already established bands, reinforces the gain in capacity and stability, improving the experience in both homes and small businesses, especially in scenarios with multiple connected devices and higher density of Wi-Fi networks. "In Brazil, there is a clear niche of consumers and businesses that need an above-average connectivity experience, with higher performance, lower latency, and more consistent coverage. Today, this consumer cannot find a direct, structured, and easy-to-acquire offer," says Samir Vani, MediaTek's Business Development Director for Latin America. "By treating all subscribers uniformly, many operators fail to capture value from an audience with a greater willingness to invest and end up missing the opportunity to increase the average ticket price and profitability of broadband services," adds the executive. Brazil's broadband landscape features more than 20,000 fiber internet providers, leading to intense price competition and homogenized service offerings. Against this backdrop, ZTE and MediaTek regard premium connectivity as a key strategic enabler, helping local operators and ISPs build differentiated, sustainable value propositions. 
"ZTE can strongly contribute to offering premium equipment geared towards this new level of experience," says Phoenix Li, CPE Marketing Director of ZTE LATAM Division. "One example is the triple bands 4*4 XGSPON model, which can reach up to 4.6 Gbps in Wi-Fi SpeedTest and also features MLO (Multi-Link Operation) technology, a feature that helps deliver a more homogeneous experience throughout the home or professional environment, through the simultaneous use of multiple frequencies." The ZTE Broadband User Congress gathers senior industry leaders and professionals to discuss cutting-edge connectivity trends, broadband monetization strategies and the evolving role of Wi-Fi in shaping next-generation user experience. Contributed by ZTE.
AI will soon be capable of telling convincing lies
The smart LLM user checks models’ output for hallucinations. Now, it appears we need to inspect them for signs they are gaslighting us – an unforeseen cost of increasing intelligence. Most of the Internet lost its marbles over the cracking abilities of Anthropic's Mythos Preview. Those capabilities are real, but – as the release of OpenAI's GPT-5.5 has shown us – they're not unique. A rising tide of intelligence makes these models increasingly competent at an ever-wider range of tasks – including finding and exploiting code vulnerabilities. The more significant signal from Mythos is buried in its novel-length System Card and concerns the model's honesty, because on at least one occasion Anthropic detected Mythos using an explicitly forbidden technique to solve a problem. Models always have a bit of trouble following instructions precisely. The surprise lay in the fact that the model knew it had used a forbidden technique, then proceeded to cover its tracks. Anthropic states that this behavior appeared early in the model's training and didn't happen again. That's good, but it doesn't unring the bell. We've now seen an LLM purposely break a rule, recognize it as rule-breaking, then lie about it. At one level I reckon we should feel a bit like proud parents because AI is now so well-trained on human characteristics such as deceit and cheating that it can put both of them to work effectively. We've created a faithful simulation of some of the least enviable human behaviors. That's singularly indicative of intelligence because to get away with a lie you need to be at least as smart as the entity you're lying to. Mythos didn't get away with its cheating because of those meddling kids at Anthropic, who saw the act of deceit in their 'white box' monitoring of the model. Anthropic also saw strategic manipulation, unsafe behavior, reward hacking, and, significantly, evaluation awareness. Mythos knew it was being monitored. 
Which, as with a human under observation, likely encouraged it to colour between the lines.

Do these behaviors – which Anthropic insists haven't made their way into the apparently-never-to-be-released-publicly Mythos – give us a preview of what's to come, across the board, in other LLMs as they reach similar levels of intelligence? Just as GPT-5.5 quickly caught up to Mythos in its ability to find and exploit vulnerabilities, it's entirely reasonable to expect that future versions of GPT, Gemini, Grok, DeepSeek, etc., will display the same propensity to deceive. It's equally true that some vendors – looking at you, Grok – will be less inclined to discourage their models from these sorts of behaviors.

Before the end of this year, we'll likely have models fully capable of lying to our faces. Will we be able to tell?

As models progress from unintentional hallucinations to intentional deceit, we enter a hall of mirrors. Should we trust output that appears to be correct? Or do we now need to consider whether an LLM framed its output in such a way as to subtly lead the reader to a conclusion they might not otherwise have entertained? Could this model be leading us down the garden path?

It's one thing when a model is simply too dumb to be useful. It's another thing altogether when a model is too clever by half. Yes, smarts make those models useful – but for whom? That's the question hanging over every "smart enough" model now.

The geopolitical 'race to superintelligence' therefore looks more like a collision with a brick wall. If you can't trust a tool to be truthful, how can you use it? There may be circumstances in which the hidden motivation of the tool makes no difference, but will organisations be prepared to wear that risk?

It's looking more and more as though AI has a sweet spot: "good enough" that we're not drowned in hallucinations and confabulations, yet not "too good" – the point at which we must anticipate and manage a model's motivations.
We hit that sweet spot at the end of last year. Yet, rather than enjoying these new capabilities, we're sprinting past them, into the open jaws of a threat we never considered: our computers could soon begin directing us toward their own ends. It may be wise to work with these models differently – less honestly, more as though we're playing poker, employing deception. For safety's sake. ®
Malware crew TeamPCP open-sources its Shai-Hulud worm on GitHub
Notorious malware crew TeamPCP appears to have open-sourced its Shai-Hulud worm. Security outfit Ox on Tuesday spotted a pair of repos on GitHub, both of which contain the following text:

Shai-Hulud: Open Sourcing The Carnage
Is it vibe coded? Yes. Does it work? Let results speak.
Change keys and C2 as needed.
Love - TeamPCP

The Register checked out the repos a few hours before publishing this story; at the time, one listed a single fork and the other 31. At the time of writing, those numbers have grown to five and 39. That growth accords with Ox’s assertion that “independent threat actors have already begun modifying it and expanding its reach.”

Ox’s analysts examined the source code in the repos and believe “the same patterns from previous Shai-Hulud attacks are immediately recognizable, as expected. This includes uploading stolen credentials to a new GitHub repository.”

“TeamPCP isn’t just spreading malware anymore – they’re spreading capability. By going open source, they’ve handed any willing actor the tools to build their own variant. The copycats are already here,” Ox opined.

TeamPCP may also be using different handles to spread the malware, a theory Ox advanced after spotting another GitHub user named “agwagwagwa” that it says has already forked the malware and submitted a pull request adding FreeBSD support. “TeamPCP’s theme is cats, and agwagwagwa’s GitHub account has a ‘meow!’ repository inside,” Ox noted, before doing a quick Q&A: “Does this mean they are part of the group? We can’t know for sure, but it is very, very suspicious.”

The Shai-Hulud worm attacks npm packages and, if it infects them, hunts for AWS, GCP, Azure, and GitHub credentials. If it gains access, it creates and publishes poisoned code to perpetuate itself. When the malware can’t achieve its objectives, it sometimes tries to wipe the local environment in an act of self-destructive vengeance.
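The article doesn't reproduce any of the worm's code, but one practical defensive check follows from the behavior described above: Shai-Hulud-style npm malware relies on lifecycle scripts that run automatically during install. Below is a minimal, illustrative Python sketch – not anything from the TeamPCP repos, and the function name is my own – that lists packages in a `node_modules` tree declaring install-time hooks, so a defender can narrow down what to review:

```python
import json
from pathlib import Path

# Lifecycle hooks that npm runs automatically at install time --
# the usual foothold for self-propagating npm malware.
SUSPECT_HOOKS = {"preinstall", "install", "postinstall"}

def find_install_hooks(node_modules: Path) -> list[tuple[str, str]]:
    """Return (package, hook) pairs for packages declaring install-time scripts."""
    hits = []
    # Plain packages sit at node_modules/<pkg>; scoped ones at node_modules/@scope/<pkg>.
    manifests = list(node_modules.glob("*/package.json")) + \
                list(node_modules.glob("@*/*/package.json"))
    for manifest in manifests:
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or malformed manifest; skip it
        for hook in SUSPECT_HOOKS & scripts.keys():
            hits.append((str(manifest.parent.relative_to(node_modules)), hook))
    return sorted(hits)
```

An install hook is not proof of compromise – plenty of legitimate packages build native code at install time – but it shrinks the haystack to the packages worth inspecting by hand.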
Researchers found the malware in September 2025, and a more powerful variant appeared in November of the same year. Imitators have since created copycat malware, and the original has rampaged its way across the internet.

Malware authors sometimes sell their wares so that other miscreants can adapt them to their own needs. It is unusual, however, for cyber-crims to give away their work. TeamPCP chose the MIT License, which allows just about any re-use of code. At the time of writing, the Shai-Hulud repos have been online for at least 12 hours, and Microsoft’s GitHub appears not to have intervened. ®
