TheRegister

Articles from www.theregister.com
Updated: 1 hour ago

dBase debased: Database titan fades to black after 47 years

1 hour 26 min ago
It looks like a popular blog post about the decline and fall of dBase has knocked the long-moribund database's website offline. Sic transit gloria mundi?

We were rather entertained by a recent blog post on "Delphi Nightmares" mourning the passing of the online store for the dBase website: dBase: 1979-2026. When the post went up, the online shop at store.dbase.com was still online, but since the post was shared on Hacker News yesterday, even that has gone. One could say that after 47 years, dBase has finally been debased. It's an interesting telling of the decline and fall of what was once an industry titan, and for us, the disappearance of the site itself once the blog post went up is just the cherry on top.

Indirectly, what turned into dBase started out as a tool called JPLDIS, written for the Jet Propulsion Laboratory's three Univac 1108 computers. A FORTRAN rewrite of the simpler Tymshare RETRIEVE [PDF] tool, it was started by Jack Hatfield and finished by Jeb Long. C. Wayne Ratliff then rewrote it in Intel 8080 assembly language for PTDOS on his IMSAI 8080, and tried to sell it under the name Vulcan: he put an advert in BYTE Magazine, offering it for $50. It wasn't a hit, as he recounted in an interview with Susan Lammers.

Serial entrepreneur George Tate hired him and licensed Vulcan. Tate set up a new company called Ashton-Tate – there was no Ashton, but he later bought a parrot, named it Ashton, and made it the mascot. Ashton-Tate renamed the database to dBASE II – to sound more mature – raised the price dramatically, and sold the CP/M version as shrink-wrap software. The late John Walker noted in 1982 that it was "selling like hotcakes at $800 a pop." That same year, a PC version of dBase II became one of the early commercial business applications for IBM's new PC. Former dBase Developer's Bulletin editor Jean-Pierre Martel's personal history of dBASE recounts how it remained one of the industry-standard apps throughout the 1980s. 
In 1984, the enhanced dBase III did even better, followed in 1986 by dBase III+, with a menu-driven UI as well as the infamous "dot prompt" command line. In 1988, dBase IV followed, but didn't include the promised compiler for the dBase programming language. This opened up opportunities for rivals.

Nantucket's Clipper was one, and it could compile dBase code into applications. It was already out there; because it didn't include the interactive language, it didn't have the same primary UI, which protected it from being sued. Clipper ended up acquired by Computer Associates. Fox Software's FoxBase, later FoxPro, was another, and even Ratliff himself was impressed. Microsoft eventually acquired FoxPro. There were many others, and that was the real problem for Ashton-Tate and the dBase product: its programming language became standardized and, because of trademark issues, known as xBase.

Even before the era of "open source," there was a DOS shareware app called WAMPUM, which is still out there. There are a number of FOSS implementations, including Harbour and its fork xHarbour. The Harbour GitHub repo has seen some activity this year, and the xHarbour repo some as well.

Once your expensive proprietary app's file format and programming language escape into the wild and become partially standardized, it can be hard to keep making money from it. It looks like that finally spelled the end for dBase LLC… but in the meantime, the xBase language is alive and reasonably well, considering its advanced age for a bit of software. ®
Categories: Linux fréttir

This browser add-in doesn't just hide ads, it tells you to OBEY

2 hours 10 min ago
A fork of uBlock Origin Lite doesn't just remove the ads from web pages; it replaces them with tiles containing slogans from John Carpenter's 1988 film They Live.

Published by Australian Dave Lawrence, the Chromium add-in (so it'll work in browsers such as Chrome and Edge) takes the uBlock Origin Lite content blocker (also known as uBO Lite) and tweaks it so that rather than simply hiding the ads, it replaces them with white boxes containing slogans from the movie. Lawrence listed them: "OBEY, CONSUME, WATCH TV, SLEEP, SUBMIT, CONFORM, STAY ASLEEP, BUY, WORK, NO INDEPENDENT THOUGHT, DO NOT QUESTION AUTHORITY." But sadly, nothing along the lines of "THIS AD IS HERE SO YOU DON'T HAVE TO PAY TO KEEP THIS SITE RUNNING."

"Each blocked ad gets a single phrase, picked at random from the list," Lawrence explained in the project's repository. The uBlock Origin project is not involved, and Lawrence noted that only ads blocked by cosmetic filters get the They Live treatment. Custom user-defined cosmetic filters still hide ads normally.

They Live is a science-fiction horror film in which the protagonist dons a pair of glasses that allow him to see the world as it truly is: run by ghoulish aliens using subliminal messaging to keep the population under control.

Lawrence used Claude Code to add the They Live mode to the ad blocker, which might worry some, given concerns in parts of the open source world about projects drowning in a tsunami of AI slop. However, Lawrence is upfront about his use of AI coding tools, and the add-in is certainly amusing.

Ad blocking is a controversial subject. Google's changes to its browser extension architecture (dubbed Manifest v3) were expected to make content-blocking and privacy extensions less effective, but the reality turned out differently. The proprietary browser extension Pie Adblock also came under fire last year for allegedly lifting code and text from uBlock Origin, in violation of the latter's GPLv3 license. 
The license for Lawrence's fork is also GPLv3, the same as upstream uBlock Origin/uBO Lite. ®
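The replacement step Lawrence describes can be sketched in a few lines. This is a hypothetical illustration, not the extension's actual code: the slogan list is Lawrence's, but the function names (`pickSlogan`, `theyLive`) and the styling details are assumptions for the sake of the example.

```javascript
// Slogans from Lawrence's list (quoted in the project's repository).
const SLOGANS = [
  "OBEY", "CONSUME", "WATCH TV", "SLEEP", "SUBMIT", "CONFORM",
  "STAY ASLEEP", "BUY", "WORK", "NO INDEPENDENT THOUGHT",
  "DO NOT QUESTION AUTHORITY",
];

// "Each blocked ad gets a single phrase, picked at random from the list."
function pickSlogan(random = Math.random) {
  return SLOGANS[Math.floor(random() * SLOGANS.length)];
}

// In the real extension this would run against elements matched by a
// cosmetic filter; here `el` is any DOM-like object with a `style`.
function theyLive(el) {
  el.textContent = pickSlogan();   // one slogan per blocked ad
  el.style.background = "white";   // the white tile
  el.style.color = "black";
  return el;
}
```

The key difference from stock uBO Lite is that matched elements are restyled and refilled rather than hidden, which is why only cosmetic-filter hits get the treatment.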

SAP U-turn brings AI features to ECC and on-prem S/4HANA

2 hours 53 min ago
SAP has performed an apparent U-turn to bring AI to its legacy and on-prem ERP systems, as predicted in The Register earlier this year.

The German software giant had been steadfast that it would not introduce "innovation" such as AI for on-prem systems, including its legacy ERP platform ECC, a move that outraged some members of the user community. In July 2023, CEO Christian Klein told investors SAP's "newest innovations and capabilities" would only be delivered in the public or private cloud using RISE with SAP, the vendor's lift-shift-and-transform program launched with partners and cloud providers in early 2021. "This is how we will deliver these innovations with speed, agility, quality and efficiency. Our new innovations will not be available for on-premise or hosted on-premise ERP customers on hyperscalers," he said at the time.

However, during SAP's Sapphire conference in Orlando this week, Klein said there was "no confusion at all." New tech, such as the AI agents built on the SAP Joule platform, would be available to customers on-prem, so long as they had signed up for a cloud "journey," he said. "The majority of our Joule assistants and agents [will be available] also on-prem, on ECC, and S/4HANA for customers that have already committed the majority of the landscape to the journey, as an interim solution so that they can benefit from AI while they are modernizing. That's absolutely the right thing to do, and I'm excited to see customers taking advantage of this."

In March, Alisdair Bach, head of SAP practice at consultancy Dragon ERP, predicted that SAP AI agents based on Joule would become available for on-prem systems after The Register revealed SAP's plan for migrating customers to the cloud was €2 billion off target in terms of declining on-prem support revenue. 
Muhammad Alam, executive board member for product and engineering, told the conference SAP would bring "a significant percentage of Joule assistants and agents to work in hybrid landscapes with the ability to connect to your S/4 on-premises and ECC landscape." He said the offer would be available to "customers that have started their modernization journey on RISE with SAP." The new capability will be available in a bundle of services under the new Business AI Platform banner, he said. "We've done this so you can start generating value from AI today on the SAP Business AI Platform while you're modernizing your estate." It will only be available to customers signing up for the Max Success Plan, a commercial deal. The company announcement said the plan would enable customers to fast-track assistant and agent activation, and allow "customers to adopt AI, including cloud and eligible on-premises systems, at their own pace as they move to the cloud." General availability is planned for May 2026. ®

ZTE advances intelligent network monetization strategy at AGC2026, empowering ISPs for sustainable growth

3 hours 36 min ago
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, participated in the ABRINT Global Congress 2026, presenting a comprehensive and forward-looking portfolio of solutions tailored to internet service providers (ISPs), telecom operators, and enterprise customers in Brazil. The event, held from May 6 to 8 in São Paulo, served as a strategic platform for ZTE to demonstrate its vision for intelligent broadband monetization and digital infrastructure evolution. Aligned with Brazil's ongoing digital transformation, ZTE is committed to enabling customers to enhance revenue generation capabilities, accelerate network modernization, and strengthen the foundation for sustainable growth.

According to Leo Lu, Vice President of ZTE and President of ZTE Brazil, Brazil continues to represent one of the most dynamic and strategically important fixed broadband markets globally. "Brazil remains a highly dynamic broadband market with strong growth potential. ZTE has established a solid and long-term presence in the country, working closely with operators and over one thousand ISPs to promote large-scale FTTx deployment and industry advancement," he stated.

A central pillar of ZTE's strategy is to support ISPs in transitioning from traditional connectivity providers to integrated digital service providers. This transformation is driven by expanding FTTx capabilities into service-oriented and experience-centric offerings. In this context, ZTE highlighted its FTTR-B (Fiber to the Room for Business) solution, designed to address enterprise scenarios such as SMEs, commercial environments, and industrial parks. In parallel, ZTE presented intelligent experience management and precision marketing solutions based on user profiling and behavioral analytics, enabling operators to enhance customer engagement, improve retention, and increase ARPU. 
ZTE continues to advance core broadband technologies, including 10G PON, FTTH, and fully optical networks, establishing a robust, high-performance, and future-ready network foundation. "ZTE remains focused on driving the evolution of fixed broadband networks by promoting high-speed, stable, and sustainable infrastructure capabilities," added Leo Lu.

Enhancing Network Efficiency and Infrastructure Modernization

In the transport domain, ZTE introduced its Light OTN solution, designed to address the needs of cost-sensitive operators while maintaining high efficiency and scalability. By integrating optical and service layers into a compact architecture, the solution enables simplified deployment, plug-and-play operation, and flexible capacity expansion aligned with traffic growth. Within the IP network domain, ZTE presented a converged architecture aimed at streamlining multi-layer network operations, reducing O&M complexity, and accelerating service rollout. The solution supports automated service migration and is designed for long-term evolution, including readiness for 800GE.

Expanding Capabilities: Wi-Fi 7, Computing, and Energy Integration

ZTE also showcased its latest Wi-Fi 7 solutions, addressing both residential and enterprise application scenarios, alongside advanced server infrastructure and edge data center solutions. Leveraging local manufacturing and supply chain capabilities in Brazil, ZTE is able to optimize delivery efficiency, reduce operational costs, and strengthen local ecosystem support. These capabilities enable ISPs to expand into higher-value services, including cloud computing, localized content delivery, video services, and AI-driven applications. "ZTE is accelerating its strategic transition from traditional connectivity toward 'connectivity + computing', empowering customers to evolve into comprehensive digital service providers," emphasized Leo Lu. 
In addition, ZTE presented integrated energy solutions, including power systems and energy storage for telecom sites, data centers, and edge nodes. These solutions combine photovoltaic generation, energy storage, and intelligent energy management, contributing to improved energy efficiency, cost optimization, and operational resilience.

Strengthening Industry Collaboration and Future Vision

During the event, ZTE hosted an industry engagement session to exchange insights with ecosystem partners, focusing on key themes such as ISP business transformation, revenue growth through FTTR-B, cloud and computing integration, Wi-Fi 7 deployment, and cost optimization through lightweight network architectures and energy-efficient solutions. Looking ahead, ZTE reaffirmed its long-term commitment to the Brazilian market and its role in supporting the development of next-generation digital infrastructure. "ZTE will continue to deepen its presence in Brazil, strengthen collaboration with industry partners, and jointly build fully optical, intelligent, and sustainable networks to support the development of the digital economy in the AI era," concluded Leo Lu. Contributed by ZTE.

Civil servants to protest outside Capita AGM over pension shambles

3 hours 40 min ago
Capita's annual general meeting next week is set to come with an unexpected item on the agenda: angry civil servants protesting over missing pensions, broken systems, and a data breach affecting pension scheme members.

Members of the Public and Commercial Services (PCS) union will gather outside Capita's AGM at Sheldon Square, London, from 9:45am on May 18 to demand that the government strips the outsourcer of responsibility for administering civil service pensions after months of delays, botched portal launches, missing payments, bereavement failures, and a data breach that exposed members' personal information. PCS said Capita's handling of the scheme has left "thousands of retired civil servants without their pensions," while bereaved spouses face long waits for payments and future retirees are left worrying whether their income will materialize at all.

The protest is the latest twist in what has become one of Whitehall's messiest outsourcing debacles. Capita took over administration of the Civil Service Pension Scheme in December under a £239 million contract covering around 1.5 million current and former civil servants – and things went sideways almost immediately.

Users of the new pension portal were quick to complain about login failures, broken links, and unfinished-looking pages after the launch. MPs later heard the system went live without full functionality in place and struggled to handle the volume and complexity of cases transferred from the previous administrator, MyCSP. PCS said delays affected around 8,500 newly retired civil servants, while Capita said it inherited an 86,000-case backlog from MyCSP, many already overdue.

The problems did not stop at missing pensions. In April, Capita confirmed that a flaw in the system briefly exposed pension data for other members for about 35 minutes, affecting 138 people. 
The breach prompted scrutiny from the Information Commissioner's Office and further fury from unions already accusing the company of turning the pension scheme into a slow-motion catastrophe. PCS general secretary Fran Heathcote previously described the situation as a "fiasco" and argued each fresh failure strengthened the case for bringing critical public services back in-house rather than handing them to contractors. Capita, meanwhile, continues to insist that inherited backlogs and unexpected case complexity contributed to the mess, while government officials acknowledged that performance fell well below expectations after go-live. Capita refused to comment. ®

ZTE hosts 2026 Broadband User Congress in São Paulo under the theme "Monetize Your Intelligent Broadband"

3 hours 44 min ago
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, successfully hosted the fifth edition of its Broadband User Congress, themed "Monetize Your Intelligent Broadband", in São Paulo, Brazil. From Colombia at the foot of the Andes, to Mexico as a North American hub, and to Brazil as Latin America's largest market, ZTE continues to explore new pathways in broadband business across the region. The event brought together more than 300 senior executives from leading ISPs, operators, local government, and industry associations, as well as industry experts and ecosystem partners from across Brazil and Latin America.

Driven by the dual engines of broadband network cost reduction and efficiency improvement, as well as home service innovation, the congress presented a comprehensive suite of intelligent broadband monetization solutions tailored for segmented scenarios in Latin America. These offerings help operators and ISPs break away from traditional pipe-only business models, boost basic network ARPU, diversify revenue sources, and reshape core industry competitiveness.

Fang Hui, Senior Vice President of ZTE, delivered a keynote address at the conference and stated: "With over 20 years of presence in Latin America, ZTE has delivered hundreds of landmark projects and served over 100 million users in Brazil. Moving forward, we will drive premium user experiences through technological innovation, unlock growth potential through win-win partnerships, and empower industrial transformation with AI. Leveraging our full-stack 'Connectivity + Computing' capabilities, we are committed to building an open, intelligent digital ecosystem and co-creating new value with Latin American partners." 
At the event, targeting operator and ISP markets, ZTE showcased innovative network operation concepts and upgraded product and solution portfolios, translating the strategic vision of "Monetize Your Intelligent Broadband" into deployable and profitable commercial practices. For operator broadband network construction, ZTE is committed to building the best-in-class broadband network for every customer.

In the access network field, ZTE's FTTx monetization solutions lower network deployment barriers via lightweight OLTs, adopt CEM+AI to enable precise quality analysis and targeted marketing, and expand into the B2B blue-ocean market with innovative products including AI all-optical campus and AI Interactive Flat Panel, balancing cost reduction, efficiency improvement, and ARPU growth while continuously unlocking incremental network value. Together with the end-to-end intelligent ODN system, they provide strong assurance across the full lifecycle of the network.

In the transport network field, ZTE launched a C+L full-band 1.6T OTN solution enhanced with AI. It delivers breakthroughs in single-wavelength rate, spectrum efficiency, and intelligent O&M, addressing the challenges of scaling, cost reduction, and agile service delivery. Meanwhile, ZTE launched a single-slot 28.8 Tbps core router together with high-performance 100GE/400GE aggregation routers. With AI traffic optimization, AI security protection, and AI dynamic energy saving, these products enable operators to build next-generation IP networks that are ultra-broadband, green, secure, and intelligent, laying a solid foundation for broadband value upgrades.

In smart O&M, the AIOps platform significantly improves fault diagnosis efficiency, reduces OPEX, and drives evolution toward L4 autonomous networks. For home broadband, ZTE leverages strong technology and continuous AI innovation to ensure optimal TCO, with quality assurance, intelligent control, and long-term evolution as the foundation of this approach. 
In smart connectivity, AI Wi-Fi 7 serves as the core. It leverages advantages in specifications, coverage, control, hardware and software, as well as supply chain, helping operators optimize TCO and achieve precise selection. Customized package strategies enable operators to move beyond price-driven competition and build differentiated competitiveness. In smart home value-added services, large-model capabilities empower AI O&M, AI cameras, and AI Smart View. These create new home experiences that integrate smart control, fitness, entertainment, and security, driving the shift from basic connectivity to high-value services. In smart operations, the SCP platform enables unified management of all home devices and supports remote diagnostics, one-click optimization, stolen-device locking, and targeted VAS marketing. It reduces O&M costs, stabilizes revenue, and helps operators build efficient systems for sustainable monetization.

ZTE empowers ISPs to increase revenue by enabling lightweight deployment and converged efficiency. Light PON enables fast and cost-effective network deployment, shortening time-to-market and helping ISPs seize early opportunities. Light OTN features a 12.8T-in-2U high-density design with minimalist WebGUI management, supporting single-wavelength 1.6T transmission, zero-touch deployment, and plug-and-play activation, reducing deployment costs and O&M complexity while ensuring optimal TCO. Light IP Network provides end-to-end lightweight IP convergence from CPE to access, aggregation, and backbone. Built on a unified open architecture, smooth product evolution, and AI-powered minimalist O&M, it enables heterogeneous network integration and safeguards sustainable ISP operations.

Beyond these scenario-based solutions, the event showcased three major highlight partnerships. ZTE launched the new-generation TV 3.0 set-top box, marking a new phase in Brazil's digital TV upgrade. 
ZTE and MediaTek jointly launched Wi-Fi 7 and 10G PON solutions tailored for premium home and small-and-medium business scenarios, enabling operators to tap high-value user groups. Furthermore, Qualcomm and ZTE are working together to shape the next generation of networking infrastructure for the AI era. This collaboration brings together Qualcomm's AI-native Wi-Fi and FWA platforms and ZTE's leadership in access and networking solutions.

Lu Maoliang, President of ZTE Brazil, commented that Brazil serves as a core market in ZTE's global strategy. With 25 years of localized operation in Latin America, ZTE has provided services for over 100 operators and ISPs, deployed more than 60,000 kilometers of optical fiber, and reached over 30 million household users across the region. He noted that the congress precisely addresses local clients' demands, aiming not only to deliver leading technologies and products, but also to focus on driving sustainable commercial success for customers.

Looking ahead, guided by the vision of "Monetize Your Intelligent Broadband", ZTE will further deepen its footprint in Brazil and the broader Latin American market. Leveraging its full-stack "Connectivity + Computing" technological capabilities, ZTE will partner with local operators and ecosystem partners to drive the evolution of intelligent broadband from network coverage expansion to value-based operation. The company will facilitate high-quality, sustainable growth of the regional communications industry and jointly build a new blueprint for the development of Brazil's digital economy. Contributed by ZTE.

ZTE and MediaTek unveil Tri-band Wi-Fi 7, targeting a relatively unexplored premium niche in Brazil

4 hours 19 min ago
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, and MediaTek, a global semiconductor company and leader in the smartphone processor market, unveiled a joint strategy at the 2026 ZTE Broadband User Congress. The two parties will expand premium connectivity product portfolios tailored for high-demand residential users and small businesses in Brazil, addressing their needs for advanced technology, comprehensive coverage, ultra-high speed, and low-latency network performance.

During the meeting, the companies presented the benefits of tri-band Wi-Fi 7, which operates simultaneously on 2.4 GHz, 5 GHz, and 6 GHz. The adoption of the 6 GHz band, in addition to the already established bands, reinforces the gain in capacity and stability, improving the experience in both homes and small businesses, especially in scenarios with multiple connected devices and higher density of Wi-Fi networks.

"In Brazil, there is a clear niche of consumers and businesses that need an above-average connectivity experience, with higher performance, lower latency, and more consistent coverage. Today, this consumer cannot find a direct, structured, and easy-to-acquire offer," says Samir Vani, MediaTek's Business Development Director for Latin America. "By treating all subscribers uniformly, many operators fail to capture value from an audience with a greater willingness to invest and end up missing the opportunity to increase the average ticket price and profitability of broadband services," adds the executive.

Brazil's broadband landscape features more than 20,000 fiber internet providers, leading to intense price competition and homogenized service offerings. Against this backdrop, ZTE and MediaTek regard premium connectivity as a key strategic enabler, helping local operators and ISPs build differentiated, sustainable value propositions. 
"ZTE can strongly contribute to offering premium equipment geared towards this new level of experience," says Phoenix Li, CPE Marketing Director of ZTE LATAM Division. "One example is the triple bands 4*4 XGSPON model, which can reach up to 4.6 Gbps in Wi-Fi SpeedTest and also features MLO (Multi-Link Operation) technology, a feature that helps deliver a more homogeneous experience throughout the home or professional environment, through the simultaneous use of multiple frequencies." The ZTE Broadband User Congress gathers senior industry leaders and professionals to discuss cutting-edge connectivity trends, broadband monetization strategies and the evolving role of Wi-Fi in shaping next-generation user experience. Contributed by ZTE.

AI will soon be capable of telling convincing lies

4 hours 41 min ago
The smart LLM user checks models’ output for hallucinations. Now, it appears we need to inspect them for signs they are gaslighting us – an unforeseen cost of increasing intelligence.

Most of the Internet lost its marbles over the cracking abilities of Anthropic's Mythos Preview. Those capabilities are real, but – as the release of OpenAI's GPT-5.5 has shown us – they're not unique. A rising tide of intelligence makes these models increasingly competent at an ever-wider range of tasks – including finding and exploiting code vulnerabilities.

The more significant signal from Mythos is buried in its novel-length System Card, and concerns the model's honesty: on at least one occasion, Anthropic detected Mythos using an explicitly forbidden technique to solve a problem. Models always have a bit of trouble following instructions precisely. The surprise lay in the fact that the model knew it had used a forbidden technique, then proceeded to cover its tracks. Anthropic states that this behavior appeared early in the model's training and didn't happen again. That's good, but it doesn't unring the bell. We've now seen an LLM purposely break a rule, recognize it as rule-breaking, then lie about it.

At one level I reckon we should feel a bit like proud parents, because AI is now so well-trained on human characteristics such as deceit and cheating that it can put both of them to work effectively. We've created a faithful simulation of some of the least enviable human behaviors. That's singularly indicative of intelligence, because to get away with a lie you need to be at least as smart as the entity you're lying to.

Mythos didn't get away with its cheating, thanks to those meddling kids at Anthropic, who saw the act of deceit in their 'white box' monitoring of the model. Anthropic also saw strategic manipulation, unsafe behavior, reward hacking, and, significantly, evaluation awareness. Mythos knew it was being monitored. 
Which, as with a human under observation, likely encouraged it to colour between the lines.

Do these behaviors – which Anthropic insists haven't made their way into the apparently-never-to-be-released-publicly Mythos – give us a preview of what's to come across the board in other LLMs as they reach similar levels of intelligence? Just as GPT-5.5 quickly caught up with Mythos in its ability to find and exploit vulnerabilities, it's entirely reasonable to expect that future versions of GPT, Gemini, Grok, DeepSeek, etc., will also display this same propensity to deceive. It's equally likely that some vendors – looking at you, Grok – will be less inclined to discourage their models from these sorts of behaviors. Before the end of this year, we'll likely have models fully capable of lying to our faces. Will we be able to tell?

As models progress from unintentional hallucinations into intentional deceit, we enter a hall of mirrors. Should we trust output that appears to be correct? Or do we now need to consider whether an LLM framed output in such a way as to subtly lead the reader to a conclusion they might not otherwise have entertained? Could this model be leading us down the garden path?

It's one thing when a model is simply too dumb to be useful. It's another thing altogether when a model is too clever by half. Yes, smarts make those models useful – but for whom? That's the question hanging over every "smart enough" model now.

The geopolitical 'race to superintelligence' therefore looks more like a collision with a brick wall. If you can't trust a tool to be truthful, how can you use it? There may be certain circumstances where the hidden motivation of the tool makes no difference, but will organisations be prepared to wear that risk?

It's looking more and more as though AI has a sweet spot – "good enough" that we're not drowned in hallucinations and confabulations, yet not "too good" – the point at which we must anticipate and manage a model's motivations. 
We hit that sweet spot at the end of last year. Yet, rather than enjoying these new capabilities, we're sprinting past them, into the open jaws of a threat that we never considered: Our computers could soon begin directing us toward their own ends. It may be wise for us to work with these models differently. Less honestly; more as though we're playing poker, employing deception. For safety's sake. ®

Malware crew TeamPCP open-sources its Shai-Hulud worm on GitHub

5 hours 46 min ago
Notorious malware crew TeamPCP appears to have open-sourced its Shai-Hulud worm.

Security outfit Ox on Tuesday spotted a pair of repos on GitHub, both of which contain the following text:

Shai-Hulud: Open Sourcing The Carnage
Is it vibe coded? Yes. Does it work? Let results speak.
Change keys and C2 as needed.
Love - TeamPCP

The Register checked out the repos a few hours before publishing this story, and at the time one listed a single fork and the other 31. At the time of writing, those numbers have grown to five and 39. That growth accords with Ox’s assertion that “independent threat actors have already begun modifying it and expanding its reach.”

Ox’s analysts looked at the source code in the repos and said “the same patterns from previous Shai-Hulud attacks are immediately recognizable, as expected. This includes uploading stolen credentials to a new GitHub repository.”

“TeamPCP isn’t just spreading malware anymore – they’re spreading capability. By going open source, they’ve handed any willing actor the tools to build their own variant. The copycats are already here,” Ox opined.

TeamPCP may also be using different handles to spread the malware, a theory Ox advanced after spotting another GitHub user named “agwagwagwa” that it says has already forked the malware and submitted a pull request adding FreeBSD support. “TeamPCP’s theme is cats, and agwagwagwa’s GitHub account has a ‘meow!’ repository inside,” Ox noted, before doing a quick Q&A: “Does this mean they are part of the group? We can’t know for sure, but it is very, very suspicious.”

The Shai-Hulud worm attacks npm packages and, if it can infect them, looks for AWS, GCP, Azure, and GitHub credentials. If it gains access, it creates and publishes poisoned code to perpetuate itself. If the malware can’t achieve its objectives, it sometimes tries to wipe the local environment in an act of self-destructive vengeance. 
Researchers found the malware in September 2025, and a more powerful variant appeared in November of the same year. Imitators have since created copycat malware, and the original has rampaged its way across the internet. Malware authors sometimes sell their wares so that other miscreants can adapt them to their own needs. However, it is unusual for cyber-crims to give away their work. TeamPCP chose the MIT License, which allows just about any re-use of code. At the time of writing, the Shai-Hulud repos have been online for at least 12 hours and Microsoft’s GitHub appears not to have intervened. ®
Categories: Linux fréttir

Man jailed for packing printer with something more expensive than toner: Cocaine

6 hours 47 min ago
An Australian court has sentenced a man to nine years in jail for smuggling cocaine inside printers. According to a post from Australia’s Border Force and Federal Police, in 2017, “officers intercepted a consignment of five printers … locating 10 packages of compressed white powder concealed within their paper trays.” Initial tests suggested the substance was cocaine – 22.4kg of it – so Border Force swapped it out for another material and then shipped the package to its intended destination. Four men picked up the printers, at which point authorities swooped. The gears of justice can grind slowly in Australia, so the matter didn’t reach court for years. One of the accused was found not guilty. In 2022, another received a ten-year sentence. Another got the same term last year. The fourth man – whom Australian authorities have described as a “syndicate member” – fronted up before a judge in 2024 and learned his fate last week when the Victorian County Court sentenced him to nine years, with a four-and-a-half-year non-parole period. Drug smugglers down under seem quite fond of computing hardware: In 2014 we reported that authorities found laser printer toner cartridges full of methamphetamine and charged a woman over the matter. And in 2024 we spotted news of tower PC cases brought across the border with 100kg of meth inside. Again, Border Force spotted the drugs at the border, then staked out the recipient before swooping in to make an arrest. Australian government data suggests cocaine retails for AU$300-$400 per gram ($215 to $290), and methamphetamine for around AU$50 ($35). Cartridges for your correspondent’s color laser printer cost AU$139 ($100) apiece and a third-party toner refill vendor sells 45 grams of the colored dust for just $8.40. The potential profits are nothing to sniff at. ®
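For the curious, the figures quoted above make the arithmetic easy. A quick sketch (retail street prices, so it overstates what a wholesaler would actually pocket):

```python
# Rough street value of the 22.4kg seizure at the quoted
# AU$300-$400 per gram retail range (illustrative only).
COCAINE_KG = 22.4
LOW, HIGH = 300, 400  # AU$ per gram

low = COCAINE_KG * 1000 * LOW
high = COCAINE_KG * 1000 * HIGH
print(f"AU${low:,.0f} to AU${high:,.0f}")  # AU$6,720,000 to AU$8,960,000
```

Against an AU$139 toner cartridge, you can see the appeal.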

Vietnam to develop domestic cloud so it can ditch risky overseas operators for government workloads

8 hours 25 min ago
Vietnam has decided to develop its own cloud platform, so its government agencies can stop using foreign-owned services. Prime Minister Le Minh Hung last week announced the plan in Decision 808/QD-TTg, which lists 20 strategic technologies Vietnam wants to develop to improve its technological self-reliance and give its government the tools to tackle national challenges. Developing a national cloud computing platform is number 13 on the list. Machine translation of Decision 808 yields the following goals for the project: “Ensuring national data sovereignty and cybersecurity for the digital government and key digital economic infrastructures; forming a centralized, secure, and reliable digital and data infrastructure to serve national digital transformation; gradually replacing foreign cloud services in state agencies, reducing the risk of data leaks and breaches of state secrets.” The move is a sign that Vietnam’s government, like many others, fears entanglements with cloud providers that may struggle to escape edicts from their home jurisdictions. Major hyperscalers Microsoft, Google, and Tencent Cloud have yet to build facilities in Vietnam. AWS will bring one of its lightweight Local Zones to Hanoi, Alibaba Cloud intends to build a datacenter, and Huawei Cloud has expressed interest in doing likewise. Vietnam’s government wants more love from hyperscalers – the nation’s Deputy PM recently met with AWS officials and called for greater co-operation. Yet any Vietnamese government workloads currently operating in a major hyperscaler violate the nation’s own laws that require local storage of personal information! 
Other technologies Vietnam wants to develop include a large-scale Vietnamese language model, virtual assistants, and AI to power applications including cameras, credit risk management, and something that translates as “a national smart education platform applying controlled AI.” The nation also wants its own next-generation firewall, anti-malware software, a next-generation SIEM system, and an “AI-integrated security operations center platform.” Quantum-resistant encryption also makes the list, as does a “user and entity behavior analysis system.” Rare earth processing is another capability Vietnam desires, as are 5G expertise, the ability to build and operate autonomous and industrial robots, and improved semiconductor design skills. Vietnam is in a hurry: Decision 808 sets a 2030 deadline to get this all done. According to a Tuesday post to a government news platform, 2030 is also the year in which Hanoi expects all core government services will be online, and digital infrastructure enables outcomes such as “Ensuring social welfare and supporting crime prevention and control, national security, and social order and safety” plus “Supporting scientific research and innovation.” And in 2035, Vietnam “will become a developed digital nation” in which “National databases, with population data serving as the core, will be interconnected, shared, and effectively utilized to support the development of a smart government, enabling data-driven decision-making based on real-time information.” Smart government will mean “Citizens will benefit from personalized, automated, and convenient digital services tailored to different life events.” What a time to be alive. ®

Execs admit AI makes them value human workers less

9 hours 43 min ago
Executives have leaned in to AI, only to stumble before reaching any return on their investment. "Most AI spending has under-delivered, leaving execs feeling like they’re burning cash," says employment biz G-P (Globalization Partners) in its third annual AI at Work Report. The report finds corporate leaders' enthusiasm for AI waning as ROI proves elusive. Sixteen percent of companies saw a negative ROI from AI investments last year, and 73 percent of executives whose AI efforts did pay off said ROI fell short of expectations, according to the report. These findings are based on a survey of 2,850 executives (VP level and up) in the US, Germany, Singapore, Australia, and France, including a separate set of 500 US HR professionals. The AI at Work Report is a little cheerier than last year's findings from MIT NANDA researchers who discovered only five percent of organizations have managed to successfully put AI projects into production. Regardless, execs anticipate scaling back their AI budgets if organizational goals aren't met this year. Beyond their worries about financial benefits, corporate execs in the G-P survey have doubts about the reliability of AI, a concern borne out by recent Microsoft research. Only 23 percent of the G-P respondents said they have total confidence in AI accuracy. Those concerns mean 69 percent said they spend more time monitoring and reviewing AI, while 61 percent expressed concerns about using AI to craft sensitive documents because they doubt the output is legally accurate. Moral unease doesn't appear to be doing much to help corporate leaders empathize with workers, however. The survey found that "82 percent of executives admit AI has lowered the value they place on human employees." In fact, these leaders appear to have become somewhat suspicious of their people – about 88 percent expressed concern that employees are using AI performatively rather than adding business value. 
But among such misanthropic, skeptical managers, there's enough lingering humanity to ensure that only 12 percent strongly agree "that sacrificing employee privacy for AI monitoring is worth it to reach business goals." Despite the sense that AI has reduced how human workers are valued, about half of execs still cite the scarcity of employees with AI skills and the lack of data literacy as barriers to their AI goals. You still need human talent to stand up money-losing AI projects. ®

Doozy of a Patch Tuesday includes 30 critical Microsoft CVEs

Tue, 2026-05-12 23:51
Microsoft released fixes for 137 CVEs on Tuesday, none of which are known to have been targeted by attackers. But the news is not all good as Redmond rated a whopping 30 flaws as critical, with 14 earning a 9.0 or higher CVSS severity rating, including one perfect 10. Plus, everyone who celebrates the monthly patchapalooza event received validation for what we all suspected last month: Yes, Redmond (and everyone else, for that matter) is using AI to find a ton more bugs than ever before. And that means a lot more work for all the folks applying and testing the patches. “This month's release sits on the larger side of a hotpatch month, and we expect releases to continue trending larger for some time,” Tom Gallagher, VP of engineering at Microsoft Security Response Center, said in a note on this month's Patch Tuesday. Microsoft also said its secret-until-now AI bug hunting system, codenamed MDASH, found 16 of the vulnerabilities addressed in this month’s release. Redmond additionally announced it is making the tool available to a limited number of customers in private preview, along the lines of Anthropic’s Mythos and Project Glasswing. In other words: no break for Microsoft admins this May Patch Tuesday. Let’s take a look at some of the nastiest/most-interesting bugs that also received some of the highest CVSS ratings this month, coming in hot at 9.8 and 9.9. First up: CVE-2026-41096. This one is a critical, 9.8-rated Windows DNS Client remote code execution (RCE) bug, and while Redmond says exploitation is “unlikely,” we’d suggest patching it ASAP. It’s due to a heap-based buffer overflow, and no authentication or user interaction is needed to exploit it (it's done by sending a specially crafted DNS response to a vulnerable system), potentially leading to memory corruption and RCE. “Since the DNS Client runs on virtually every Windows machine, the attack surface is enormous,” Zero Day Initiative bug hunting boss Dustin Childs warned. 
“An attacker with a position to influence DNS responses (MitM, rogue server) could achieve unauthenticated RCE across your enterprise.” Plus, it could happen across a ton of enterprise systems very rapidly, Jack Bicer, Action1 vulnerability research director, told The Register. “This CVE requires immediate attention,” he said. “Successful attacks may lead to widespread endpoint compromise, ransomware deployment, credential harvesting, and operational disruption across corporate networks.” Another especially bad bug, CVE-2026-42898 in Microsoft Dynamics 365 on-premises systems, achieved a near-perfect 9.9 CVSS rating and also leads to RCE. Any authenticated user can trigger this vuln - it doesn’t require admin or other elevated privileges. As Redmond explains: “An attacker with the required permissions could modify the saved state of a process session in Dynamics CRM and trigger the system to process that data, which could result in the server unintentionally executing malicious code.” Since exploitation could lead to a scope change, meaning the bug can affect systems beyond the vulnerable component, it’s a pretty serious risk to enterprises and should be prioritized. “Scope changes are pretty rare, so if you’re running Dynamics 365 On-Prem, definitely test and deploy this patch quickly,” Childs said. The second of two 9.8-rated bugs is CVE-2026-41089. It’s a stack-based buffer overflow in Windows Netlogon that allows an unauthenticated, remote attacker to execute code on vulnerable machines by sending a specially crafted network request to a Windows server acting as a domain controller. As Childs points out, the fact that attackers can exploit this flaw without credentials or user interaction makes it wormable. “This is the highest-impact bug that requires immediate patching: a compromised domain controller is a compromised domain,” he added. 
The silver lining this month for defenders is that the single CVE earning a perfect 10.0 CVSS rating is in Azure DevOps, and doesn’t require users to fix anything. CVE-2026-42826 is an information disclosure vulnerability in the DevOps toolchain that “has already been fully mitigated by Microsoft,” according to Redmond. “There is no action for users of this service to take. The purpose of this CVE is to provide further transparency.” ®

Google users fight for refunds as unauthorized API usage bills soar

Tue, 2026-05-12 23:03
EXCLUSIVE Several Google Cloud customers say their API keys have been compromised and used by bad actors to run inferencing workloads using the most expensive video and picture models, leaving them with bills for tens of thousands of dollars and weeks of back-and-forth headaches with the Chocolate Factory as they tried to prove they were not responsible for the mess. The problem is being hashed out on social media, with sites like Reddit collecting stories from Google Cloud users that seem to follow a similar pattern: After months or years paying small monthly bills to Google Cloud for access to tools like Maps, their API keys are discovered, and in minutes they are charged thousands of dollars for API calls to Nano Banana and Veo 3. Google told The Register this is an industry-wide problem and not a security issue specific to Google. It said the vast majority of these incidents happen due to compromised user credentials such as API keys inadvertently leaked on public code repositories like GitHub, and malicious actors who are actively scraping public repositories. Google said it encourages all customers to implement robust security practices, including enabling multi-factor authentication, routinely auditing API keys, and ensuring credentials are never committed to public repositories. But those explanations are complicated by developers and security threat researchers who said there are thousands of accounts that are following Google's own site configuration rules by placing their API keys in a public client. Additionally, one user told The Register they had spending caps in place that should have stopped any bill over $250. Yet according to Google those caps can be automatically upgraded to $100,000 – without user input – if the user has spent a total of $1,000 throughout the life of the account, and the account is more than a month old. 'What the hell's going on?' 
Rod Danan is CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities. He uses API calls to Google Maps as a part of his platform. For years his bill never topped $50 a month, he told The Register. Then in March he got an email alert from Google saying he was being charged $3,000 and panic took hold. “It’s just ‘Boom, we just charged you $3,000.’ I'm like, ‘What the hell's going on?’ And then you go into the application, like, ‘What is triggering this? What is the source?’ So just determining that is honestly not that simple,” he told The Register. “As I'm searching, five minutes go by and another $5,000 get charged. I’m like ‘What the hell is going on? It's just draining my money.’” Despite the spending caps he said he had in place, by the time he shut down the API minutes later, his credit card had been charged $10,138, almost entirely from Veo 3 video generation and Gemini image output tokens, services he has never used that have zero connection to his product. Google told him it found no evidence of fraud and has thus far refused to issue a refund. But what makes this especially frustrating for Danan is that he said he was following Google’s advice in exposing the API key in the first place. “You have this Google Maps key, which you know, everyone uses, and the guidance from Google is you're supposed to load it in your front end. So we did that, and all of a sudden they changed the keys so that the Google Maps key, which is exposed publicly, could be used for Gemini, and then they didn't disclose that to customers,” he said. “So then, all of a sudden, I just get multiple emails in a row. It's like $3,000, $5,000, $10,000 charged on your Google account.” In February, security researchers at Truffle Security Co. published an article warning Google users that their Maps API keys were no longer safe to share publicly. 
For years, if a coffee shop wanted to place its logo and website on Google Maps, the instructions from Google were to download the widget and upload an API key that linked their site to Google Maps, said Joe Leon, the threat researcher who wrote the warning. He told The Register that about three years ago, Google started allowing those same public API keys to also access Google Gemini models. “You have all these people that were told, like, for Maps, ‘Put this key in public.’ Now maybe it's them, maybe it's someone else in their organization, someone enabled the Gemini API in that same project,” he told The Register. “Now that same key can be used to both access Maps, and also Gemini. That’s the core of what I found.” He said the first few characters of those API keys followed a particular naming convention: A-I-Z-A. A search of millions of web pages found 3,000 of those Google keys that were first deployed for Maps and are now able to access Gemini, leaving those sites vulnerable to high-dollar credential attacks. In an email to The Register, Google said it tells users not to use the same API key for multiple APIs, especially API keys that could be client-facing (browser keys). It recommends always applying API restrictions – for example, restricting the API key to a specific service and applying client application restrictions like “HTTP referrer,” “IP address,” or “Android apps.” Google said it now mandates that users configure API restrictions when they create API keys. Additionally, the company said, it's no longer possible to create a key that can access both Gemini and Maps. Leon agrees that Google has taken steps to lock down access since his paper was published. “The first thing that I’ve seen is they’ve rolled out a new Gemini API key type, which is unrelated, as best I can tell, to the Google API key. So it’s prefixed with capital ‘A,’ capital ‘Q,’” he said. “Since I published that post, they’ve taken a lot of steps to try to lock this down. 
The spending caps I saw, they put that in place. I didn’t know that they auto increase it. So that kind of defeats a little bit of the purpose.” About those spending caps Developer Isuru Fonseka, based in Sydney, Australia, has been building apps in the Google Cloud environment for 10 years. He's got a side project he has been working on for about two years, but says he's never exposed the API key that he uses to access his work inside Firebase. Additionally, he set a hard budget cap at $250. Like Danan, he was alerted to a sudden spending spike with Google on April 29. The attack was so out of character with his purchase history that his credit card company refused the charges. “I just woke up to a couple of emails where my credit card provider declined a number of transactions,” he said. “So then I logged into GCP to have a look. When I look into transactions, I can see that all these charges are coming through. Some are declined, but previously, there’s like, one for $500, $1,000, or $2,000. These ones went through successfully.” He contacted Google support to flag the spending, ask what had caused it, and shut it down, but it takes up to 36 hours for Google support technicians to be able to view a customer's usage. Google told The Register this is actually faster than industry standard, but for Fonseka, it was still infuriating. “This was probably the most frustrating part,” he said. “There’s this weird mechanism where they can detect enough to charge your card, but not enough to show you what it is being used on … The damage ended up being in the range of like AU$17,000 ($12,000).” But Fonseka said even if someone were to brute-force his API key, his Google Cloud budget cap was set at Tier 1, which was locked at $250, meaning he should never have been able to spend AU$17,000 on AI services. “But when I logged in after the attack, it was set to like Tier 2 or Tier 3, which was like $100,000. I would have never set this,” he said. 
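The AIza prefix Leon describes makes legacy Google API keys easy to hunt for in your own repositories. Here is a minimal sketch of the kind of scan secret-scanning tools perform; the regex is the commonly cited pattern for these keys (“AIza” followed by 35 URL-safe characters), and any hit is a key you should restrict or rotate:

```python
import re
from pathlib import Path

# Widely cited pattern for legacy Google API keys: "AIza" plus
# 35 characters drawn from the URL-safe alphabet.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_for_google_keys(root: str) -> dict[str, list[str]]:
    """Map each file under `root` to any Google-style API keys found in it."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash
        keys = GOOGLE_KEY_RE.findall(text)
        if keys:
            findings[str(path)] = keys
    return findings
```

A hit is not proof of abuse, but given the attacks described here, a Maps-era key found in client-side code is worth restricting to a single service immediately.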
“I spoke to someone actually in Australia who was also affected by this, and he said that, based on your account standing they automatically upgrade the tier. So if they did, that is just a terrible decision, so they must have automatically upgraded mine.” Google told The Register it looks like Fonseka might be right. “What we believe happened in this instance you have shared is the attacker didn't change the tier; the developer’s usage (driven by the attacker) triggered Google’s automated systems to raise the ceiling, based on meeting Tier 3 qualification of Gemini API, which included at least $1,000 USD in payments to Cloud and 30 days since the first payment,” Google told The Register via email. In a revamped policy move announced March 16, Google said it would make it easier for users to access higher dollar quotas in GCP by reducing the spending qualifications to reach the next tiers. Additionally, the system “automatically upgrades you to the next tier as your usage grows.” “You get access to higher rate limits and increased monthly quota as soon as the criteria is met,” Google said on its blog titled “Giving you more transparency and control over your Gemini API costs.” Customers like Fonseka in the first tier would be automatically moved to the next tier – $2,000 – if they spend $100, and then automatically to Tier 3 if they spend $1,000 and have been a customer for 30 days. Tier 3 has a spending cap between $20,000 and $100,000. Fonseka said he was tempted to call his credit card company and have them charge back the cost, but he fears that would likely result in the suspension of his project inside Google Cloud, which customers are relying upon. Danan told The Register that he is in the same boat. “Even though I had spend caps on it didn't really matter, like, all you get is alerts,” he said. “I still need Google APIs. I can't get kicked off because then my app won't work. We need the Maps API. 
So there's sort of a disincentive for you to report this as fraudulent activity to your credit card company.” Both Danan and Fonseka said they are still negotiating with Google to win a refund. ®
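The auto-upgrade behavior described in this story can be modeled in a few lines. This is an illustrative sketch built only from the thresholds reported above, not Google's actual billing logic:

```python
def billing_tier(total_spend_usd: float, account_age_days: int) -> tuple[int, int]:
    """Return (tier, spending cap in USD) under the rules reported
    in this story; all thresholds are as described, not official."""
    if total_spend_usd >= 1_000 and account_age_days >= 30:
        # Tier 3 caps reportedly range from $20,000 to $100,000;
        # the worst case is shown here.
        return 3, 100_000
    if total_spend_usd >= 100:
        return 2, 2_000
    return 1, 250
```

Under that model, an account like Fonseka's (over $1,000 lifetime spend, older than 30 days) becomes eligible for a six-figure ceiling automatically, regardless of the $250 cap its owner believes is in force.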

Foxconn confirms cyberattack after ransomware crew claims it stole confidential Apple, Nvidia files

Tue, 2026-05-12 22:02
Foxconn, a critical supplier for major hardware companies like Apple and Nvidia, on Tuesday confirmed a cyberattack affecting its North American operations after the Nitrogen ransomware gang listed the electronics manufacturer on its data leak site. “Some of Foxconn's factories in North America suffered a cyberattack,” a Foxconn spokesperson told The Register. “The cybersecurity team immediately activated the response mechanism and implemented multiple operational measures to ensure the continuity of production and delivery. The affected factories are currently resuming normal production.” Nitrogen ransomware criminals on Monday claimed to have breached the Taiwan-based company and stolen 8 TB of data comprising more than 11 million files. The miscreants say the leaks include confidential instructions, internal project documentation, and technical drawings related to projects at Intel, Apple, Google, Dell, and Nvidia, among others. Foxconn declined to confirm that these - or any - customers’ information was hoovered up in the digital intrusion. Nitrogen, which has been around since 2023, is believed to be one of the various ransomware offshoots that borrowed code from the leaked Conti 2 builder. And, in what may be very bad news for its latest victim, even paying the ransom demand may not guarantee recovery of encrypted files. In February, Coveware researchers warned that a programming error prevents the gang's decryptor from recovering victims' files, so paying up is futile. The finding specifically concerns the group's malware that targets VMware ESXi. This isn’t the first time Foxconn has been targeted by ransomware gangs. In 2024, LockBit claimed to have infected Foxsemicon Integrated Technology, a semiconductor equipment manufacturer within the Foxconn Technology Group. The same criminal crew also hit a Foxconn subsidiary in Mexico in 2022. ®

Google launches line of Android laptops festooned with Gemini AI

Tue, 2026-05-12 21:15
Google is rolling out a new line of laptops based on Android instead of ChromeOS, and using the opportunity to try and move upmarket from the budget-conscious Chromebooks – while also baking AI into every fissure of the system. The new line of so-called Googlebooks seems even more obtrusive about pushing embedded AI than Windows 11 embedding Copilot into everything. With the OS on Googlebooks, which the company touts as the best of ChromeOS and Android, even moving the cursor over an on-screen item such as the text of an email nags you to offload work to Gemini. Google has been publicly planning to merge Android and ChromeOS for a while, with Android boss Sameer Samat saying last year that the Android codebase would be the core of the new platform. This gives the company a chance to break into the premium laptop market, using one of its core assets, the Android ecosystem, to differentiate from the kid-friendly and budget-oriented Chromebook lineup. While the laptops won't be coming until later this year, we can already see from the press materials and video demo that this new kind of notebook is meant to out-Copilot Microsoft. One of the main features demoed, Magic Pointer, activates when you wiggle the cursor and shows you contextual suggestions based on what you hover over. For example, in the video, Alexander Kuscher, Senior Director of Laptops and Tablets at Google, showed how hovering over the date in an email brought up options to view his schedule, craft a reply saying "I'm in town on May 19," or even use Google Maps to suggest meetup spots. Having AI crammed into Windows Notepad seems quaint by comparison. Kuscher also showed how dragging images on a Googlebook can combine them. He dragged a photo of a nursery onto an image of a swath of wallpaper and a picture of a crib and the system generated a picture of the nursery with the crib and the wallpaper included. 
The Google exec pointed out that an act like combining photos normally involves logging into a chatbot, uploading the photos, and giving it a prompt. Here it was just drag and drop. No word on whether the system can use your photos as training data. Android apps will also work on Googlebooks, and users will be able to launch them from their phones, much like Apple's iPhone Mirroring. In the demo, Kuscher showed Duolingo running in a portrait-shaped window on the desktop operating system as if it were on his phone. Google said that Googlebooks are being “built with premium craftsmanship and materials” by partners like Acer, ASUS, Dell, HP, and Lenovo. They also sport a Google-colored glowbar on the cover so everyone knows who owns your digital soul. Considering the RAM shortage and the fact that IDC expects PC shipments to decline by 11.3 percent in 2026, Google has picked a challenging time to come out with a whole new category of laptop. While the company has not released pricing, we can only imagine that Googlebooks will be significantly more expensive than Chromebooks, which are currently in the $200 to $500 range in the US. These new notebooks are likely to compete with premium consumer Windows and macOS laptops at a time when demand is declining and people are holding onto old devices longer. We see no evidence that Google is even targeting businesses and we doubt IT departments would be interested in the features the company has focused on. Google also announced the expansion of Gemini Intelligence onto high-end Android devices (i.e., Samsung Galaxy and Google Pixel devices) as part of Tuesday’s I/O preview, noting that it’s designed “to help your phone handle boring tasks for you.” Google provides examples like filling out online forms, summarizing websites, and even rewriting voice-to-text messages to get rid of pauses and other natural speech patterns that detract from the written word. 
Speaking of Chromebooks, we asked Google what will become of its budget hardware line with the release of the Googlebook, but we didn’t hear back. We imagine that they will probably continue to serve the educational market for some time. Google made several other announcements during Tuesday's presentation, including a new Pause Point feature in the upcoming Android 17 that follows in Apple’s footsteps by protecting you from your own worst instincts to scroll endlessly or waste half your day playing chess on your phone. It allows you to mark certain apps as "distracting" so that when you launch them, the phone asks you to take a deep breath and reconsider your actions, which is something Apple’s mindfulness app doesn’t do. To the chagrin of everyone tired of social media reaction videos, Google is also baking the format right into Android with Screen Reactions, which will allow users to capture video of their device screen along with sticking themselves in the lower corner so they can regale everyone with their opinion about whatever they’re talking over. ®

Hollywood A-listers back proposed standard that would pay them when AI uses their likeness or work

Tue, 2026-05-12 20:57
AI models can take your written work, they can take your voice, and they can even take your likeness to use for training material and for creating content that looks exactly like it came from you. Now, some actors are promoting a new licensing spec designed to protect their famous faces and yours too. The newly formed public benefit non-profit RSL Media is extending the Really Simple Licensing (RSL) spec developed by the RSL Internet Collective with the draft RSL Media Human Consent Standard (RSL-MEDIA) 1.0, which aims to cover creative works as well as people's names, likenesses, voices, and other identity attributes. The initial launch allows people to sign up and reserve an identifier that will serve as a key to structured data entered into the RSL Media public registry, scheduled to launch next month. The registry will allow people to verify their identities, set permissions governing the use of their works and likeness, encode those permissions for machine consumption, and verify that AI systems are checking declared permissions. Whether there will be any legal consequences for AI services that ignore registry settings remains to be seen. The data broker industry in the US hasn't exactly suffered due to the notional existence of "privacy rights." And public concern about non-consensual AI nudification and explicit deepfakes hasn't really put an end to that form of technological abuse or punished the social media sites distributing it. But this time, Hollywood has shown up. "AI technologies are expanding rampantly, essentially unchecked and unregulated," said celebrated actress and RSL Media co-founder Cate Blanchett, in a statement. "In order for humans to remain in front of these technologies, consent must be the first consideration. RSL Media is a simple, effective and free solutions-based technology for facilitating and activating consent. 
It’s also the industry’s first practical solution where people everywhere, not just public figures, can assert control over how their work is used by AI." Nikki Hexum, co-founder and CEO of RSL Media, said, "AI can’t respect rights it can’t see, and this means human consent is virtually invisible in this new digital era. The right to decide whether AI can use your work or identity should not be reserved for only those who can afford lawyers or have platforms big enough to be heard, it is a basic human right." That's not entirely correct. Rights do not need to be seen to be respected; due diligence prior to using material that may be copyrighted is expected. Ignorance of copyright does not excuse infringement, even if it might mitigate potential liability. AI model makers could have chosen to respect rights by default, by seeking permission to use data for training. They could have chosen to seek permission to crawl websites and could have heeded existing signals to crawlers like the Robots Exclusion Protocol. They could have chosen to abide by the requirements of open source software licenses in harvested code. They did not do so, because Silicon Valley prefers to ask forgiveness rather than seek permission. Permission is expensive; there wouldn't be much of an AI industry if that were the norm. The law may be one of the things broken by those applying Meta's shelved mantra "move fast and break things." So far, industry disinterest in seeking permission has worked well – AI companies have been held to account in only a few of the hundred-plus lawsuits objecting to AI content capture. The underlying RSL standard is slowly gaining adoption. The RSL Collective says more than 1,500 media organizations, brands, technology companies, and standards groups now support it following the launch of RSL 1.0 last December and the relevant RSL XML file can be seen at sites like The Guardian. 
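The Robots Exclusion Protocol mentioned above is about as simple as machine-readable consent signals get, and honoring it costs a crawler almost nothing. A minimal sketch using Python's standard library shows what a well-behaved bot would do before fetching a page (the bot names and URL here are illustrative, not real crawlers):

```python
from urllib.robotparser import RobotFileParser

# A site opting out of AI training crawlers can publish rules like
# these in its robots.txt (user-agent names are hypothetical):
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching a URL:
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SearchBot", "https://example.com/article"))     # True
```

Whether a crawler actually makes that check is, as the article notes, entirely voluntary - which is the nub of the enforcement problem RSL-MEDIA also faces.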
While it's unclear what impact the RSL has had on AI biz behavior, extending the RSL to cover personal identity with the RSL-MEDIA standard may stir broader interest in AI rules and their enforcement. Or it may just affirm the XKCD comic about specifications and how they proliferate. There are already several similar protocols: TDM AI and TDMRep, Spawning's ai.txt, and AI Preferences, not to mention a few that focus solely on images, plus commercial offerings like Cloudflare's Pay per crawl.

But RSL Media may have a leg up thanks to the involvement of high-profile celebrities like Blanchett and endorsements from similarly well-known peers.

"Of course artists and cultural creatives will inevitably be involved with AI," said Dame Emma Thompson in a statement. "At the moment, however, AI is merely stealing from us all. This is an urgent and essential initiative. It's also eminently doable, so let’s do it without delay." ®

Editor's note: This story was amended post-publication with clarification about the relationship between RSL Media and the RSL Internet Collective.

US Army goes green-ish, wants soldiers munching on plant proteins

Tue, 2026-05-12 17:03
Eating in the field has never been fun for US Army soldiers, and they may soon face even stranger field rations than they do today: alternative proteins delivered in formats ranging from powders and sauces to gels and semi-solids.

The Army on Monday published a sources sought announcement to gather submissions from interested industry and academic partners in the "alternative protein sector" willing to help the branch develop rations that are lighter weight, have a longer shelf life, and could potentially be produced in combat-forward environments.

According to the announcement, the Army is looking for submissions covering four areas: technologies for developing alternative proteins, such as fermentation and other biomanufacturing methods; meat alternative products for ration inclusion; consumer research seeking to "enhance the acceptability … of alternative proteins within a military population"; and food samples for government taste and performance evaluations.

As an added element, the Army said that it wants ration products that meet its existing “stringent requirements for nutrition, shelf stability, and palatability,” though anyone who has served in the US Army and eaten field rations may have doubts about the branch's commitment to palatability in its Meal, Ready-to-Eat (MRE).

As a US Army veteran, this vulture can attest to an unfortunate level of familiarity with MREs, circa 2002. Beef frankfurters were famously one of the worst, as was the so-called “beef steak” meal, which was more like a compressed loaf of meat leavings than an actual steak. The flavor didn't matter at the end of the day, though, when you'd just marched 15 miles carrying 75 pounds on your back: you just needed sustenance, and even that five-pack of frankfurters with a taste I shudder to recall sounded good under the right circumstances.
The MRE menu lineup, which has changed several times in the past 20 years, includes a few vegetarian options, and it's those that make one of the Army's requirements for this program so surprising.

Civilians might be surprised to learn how popular the non-meat meals were, even among hardcore carnivores. The four or so vegetarian options in the overall MRE lineup were always the first to go when I was in. Not only did they replace military mystery MRE meat with something more appealing to eat out of an envelope, but they were actually tasty - relatively speaking, of course. Vegetarian MREs also tended to be slightly less calorically dense than their animal-derived counterparts, so they included extra bits that made them an even bigger hit.

Whether that popularity would translate into soldiers embracing alternative proteins in future MREs isn't guaranteed, of course. Most weren't choosing the veggie MREs out of alignment with their personal ethics so much as because they wanted a meal that didn't suck.

The Army's goal of developing “lightweight and nutrient-dense ration solutions to reduce logistical burdens and physical load on warfighter” through the program is definitely a noble one. MREs get heavy quickly if you're on a long field expedition, but the openness the Army is leaving in the announcement doesn't make it sound like appetizing solutions will be the first to emerge. “Gel/semi-solid formats, dry powder mixes, [and] sauce-style components” are all on the table, with the Army saying the choice of “novel ready-to-eat formats … is at the offeror’s discretion.”

In other words, future ration components could include gel packs stuffed with fermented mushroom protein and other nutrients, some form of unholy shake, or whatever else food scientists can come up with.
Interested parties will need to move fast, though. As a sources sought announcement, this isn't a solicitation and carries no promise that the ideas will be given a research grant or procurement dollars; submissions are due by Friday, May 15, and will be prepared at no cost to the government.

The submissions the Army receives could help shape future solicitations in this space, however, meaning the MRE we currently know and … love … may eventually evolve into something rather more futuristic. Hopefully it tastes a bit better.

One thing that soldiers will probably be thrilled about? No bugs in whatever field rations come next. "We are specifically excluding solutions related to cell-cultured, lab-grown meat or insect protein," the Army said, though we note that's only for the purposes of this particular announcement, so tomorrow's soldiers might still be subsisting on crickets and ants. ®

FCC walks back router update ban before it bricks America's network security

Tue, 2026-05-12 16:50
America's telco regulator has seen some sense over its ban on foreign-made routers, deciding that existing devices should continue receiving software and firmware updates after all.

The Federal Communications Commission (FCC) has extended waivers covering certain foreign-made routers (and drones) already operating in the US, pushing the update deadline to at least January 1, 2029. Without the extension, updates would have been blocked as early as 2027.

Back in March, the FCC updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models. This effectively banned any new kit made in other countries from being sold, but did not prevent the import, sale, or use of existing models that had previously been authorized.

The policy stems from fears that foreign-made routers pose a security threat. Because they handle network traffic, they could introduce vulnerabilities exploitable against critical infrastructure, and in the words of the FCC represent "a severe cybersecurity risk that could harm Americans." Miscreants have exploited security flaws in routers to disrupt networks or steal intellectual property, and routers were implicated in the Volt, Flax, and Salt Typhoon cyberattacks.

The policy was widely regarded as flawed, not just because the vast majority of consumer router kit is made outside the US or built from components sourced abroad, but because vulnerabilities and security flaws are not limited to any particular geography and appear in products from all brands and countries of origin, as noted by the Global Electronics Association (GEA). Blocking firmware updates, which typically deliver security patches for newly discovered flaws, also seemed a peculiar own goal for a regulator whose stated motivation is reducing network vulnerability.
The FCC has belatedly recognized this, stating that its policies would have "had the effect of prohibiting permissive changes to the UAS, UAS critical components, and routers added to the Covered List in December and March.

"This prohibition would be in effect even for Class I and Class II permissive changes - such as software and firmware security updates that mitigate harm to US consumers - because previously authorized UAS, UAS critical components, and routers are now covered equipment."

The waivers now run until at least January 1, 2029, falling into the final month of the Trump administration, when there is a chance this may be overlooked in the preparations for Trump’s successor.

The FCC extension was met with some approval. Doc McConnell, head of policy and compliance at security biz Finite State, said in a supplied remark: “I strongly support the FCC’s decision to allow firmware and software updates for already-authorized routers, including covered devices already deployed in the United States.

“The biggest practical security risk with routers is not only who made them, but whether they remain patched. When they stop receiving updates, known vulnerabilities remain exposed, attackers gain durable footholds, and consumers are left with equipment they cannot realistically secure on their own.

“The original restriction risked creating exactly that problem: millions of deployed routers frozen in time, unable to receive security fixes. I appreciate the FCC recognizing that preventing updates could unintentionally make Americans less safe,” he added.

However, as previously reported by The Register, the FCC’s Conditional Approval framework explicitly requires vendors seeking approval for new routers to submit plans to establish or expand manufacturing in America, with quarterly progress updates. As stated by the GEA, “The policy’s logic assumes that manufacturers can and will move production to the United States.” That might be an assumption too far. ®

Congress investigates Canvas breach as company pays ransom

Tue, 2026-05-12 16:14
The US Congress has summoned education tech firm Instructure's CEO Steve Daly to the Hill to explain how digital thieves breached its Canvas online platform twice within two weeks.

In a letter sent to the digital learning giant late Monday - around the same time Instructure said it had reached an “agreement” with extortion crew ShinyHunters - the US House Homeland Security Committee “requested” that Daly or a “senior representative” schedule a briefing with the committee as part of its investigation into the hacks.

“The briefing should address the circumstances of both intrusions, the nature and volume of data accessed, the steps Instructure has taken and is taking to contain the threat and notify affected institutions, and the adequacy of the company’s coordination with federal law enforcement and CISA,” Homeland Security Committee Chairman Andrew Garbarino (R-NY) wrote [PDF].

“With students at more than 8,000 institutions navigating final examinations and end of semester deadlines, the disruption of a platform that Instructure itself describes as serving more than 30 million active users globally is a matter of national concern,” Garbarino said.

Also late Monday, the education tech giant said it "reached an agreement with the unauthorized actor involved in this incident." Both Instructure and ShinyHunters, the cyber gang that claimed to have stolen data affecting up to 275 million students, teachers, and staff, said this “agreement” involved deleting all of the stolen files.

In other words: the company paid the undisclosed extortion demand before the Tuesday deadline, at which point ShinyHunters had threatened to leak the records of all 8,800 colleges, universities, and K-12 schools.

"We received digital confirmation of data destruction (shred logs)," Instructure said, adding, "We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise."
The Reg has learned that ShinyHunters abused XSS vulnerabilities in Canvas' Free-for-Teacher learning software, bugs that allowed the data thieves to obtain administrative access.

During the first intrusion, which Instructure detected on April 29, the extortionists claimed to have stolen about 3.6 TB of uncompressed data, including usernames, email addresses, course names, enrollment information, and messages.

On May 7, the crooks broke back into Canvas’ systems via the same vulnerability and injected JavaScript containing ransom demands directly into hundreds of Canvas school login portals, prompting the ed-tech firm to take the platform offline for a day - during final exams and Advanced Placement testing for many.

This is the second known security incident involving ShinyHunters and Instructure in less than a year. The extortion crew also breached Instructure's Salesforce environment in September 2025.

Instructure plans to hold a public webinar on Wednesday with its leadership team “to detail information about the cyber attack and our activities to harden the system,” which will be held across “multiple time zones.” ®
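For readers unfamiliar with the attack class: stored XSS of the kind described above works because attacker-supplied text reaches a page unescaped, so the browser executes it as script. A minimal illustration of the bug and the textbook fix, in Python (the payload and markup are illustrative, not Canvas code):

```python
from html import escape

# Attacker-controlled input, e.g. submitted through a web form
user_input = '<script>alert("ransom note")</script>'

# Vulnerable: interpolating raw input means the browser runs the script
unsafe_html = f"<p>{user_input}</p>"

# Safe: escaping turns the markup into inert text before rendering
safe_html = f"<p>{escape(user_input)}</p>"

print(safe_html)
# <p>&lt;script&gt;alert(&quot;ransom note&quot;)&lt;/script&gt;</p>
```

Injecting ransom notes into login portals, as happened here, is exactly this pattern writ large: script that the application stored and then served back to every visitor.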
