Linux fréttir
Honda Retreats To Hybrids After Failed EV Bet Triggers Record $9 Billion Loss
An anonymous reader quotes a report from Electrek: Honda is waving the white flag. The Japanese automaker previewed two new hybrids set to launch by 2028 after taking a more than $9 billion hit from its failed EV bet, leading to its biggest loss in company history. Honda admitted it was "unable to deliver products that offer value for money better than that of new EV manufacturers, resulting in a decline in competitiveness," after suddenly announcing plans to cancel three new EVs in the US in March, warning restructuring costs could reach 2.5 trillion yen ($15.7 billion).
After Honda on Thursday posted its first annual loss since becoming a publicly traded company in 1957, CEO Toshihiro Mibe revealed the company's comeback plans. Honda is no longer planning to phase out gas-powered vehicles by 2040. Instead, it now aims "to achieve carbon neutrality by 2050," through a mix of EVs, hybrids, carbon-neutral fuels, and carbon-offset tech. Starting next year, Honda plans to begin introducing its next-gen hybrids, underpinned by a new hybrid system and platform. Honda said it aims to improve fuel economy by over 10% in its upcoming hybrids. The new system is expected to cut costs by over 30% compared with Honda's current hybrid system.
By the end of the decade, Honda plans to launch 15 new hybrid models globally. In North America, its most important market, the company will introduce larger hybrids in the D-segment or above. Honda previewed two of the new hybrids during the business update: the Honda Hybrid Sedan Prototype and the Acura Hybrid SUV Prototype, which the company said will go on sale within the next two years.
Read more of this story at Slashdot.
Categories: Linux fréttir
NASA's Psyche mission set for a brief encounter with Mars
More than two years after launch, NASA's Psyche mission will whizz past Mars on May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, which was launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid, Psyche. In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the signals from the spacecraft as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory. These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries is the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune. The trajectory allowed significant propellant to be saved. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision. Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. According to NASA, Sarah Bairstow, Psyche's mission planning lead at the Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we’ll also make observations with the mission's other science instruments." 
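The mechanics behind that "free" boost can be sketched numerically: in the planet's rest frame, a flyby only rotates the spacecraft's velocity vector, but transforming back to the Sun's frame can leave the craft moving faster. A toy 2D example (all velocities and the turn angle are invented for illustration, not Psyche mission values):

```python
import numpy as np

# Toy 2D gravity assist: hypothetical numbers, not mission data.
u_planet = np.array([24.0, 0.0])   # planet's heliocentric velocity, km/s
v_in     = np.array([18.0, 10.0])  # spacecraft heliocentric velocity, km/s

v_rel_in = v_in - u_planet         # switch to the planet's rest frame

turn = np.deg2rad(-90.0)           # flyby turn angle (hypothetical)
rot = np.array([[np.cos(turn), -np.sin(turn)],
                [np.sin(turn),  np.cos(turn)]])
v_rel_out = rot @ v_rel_in         # same speed, new direction

v_out = v_rel_out + u_planet       # switch back to the Sun's frame

# Speed is conserved in the planet's frame...
assert np.isclose(np.linalg.norm(v_rel_in), np.linalg.norm(v_rel_out))
# ...but not in the Sun's frame: the heliocentric speed increases.
print(np.linalg.norm(v_in), np.linalg.norm(v_out))
```

The direction of the turn is what decides whether the spacecraft gains or sheds heliocentric speed, which is why the trajectory planners' precision matters so much.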
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at Arizona State University. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ®
Categories: Linux fréttir
Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028
AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ®
Categories: Linux fréttir
Exploited Exchange Server flaw turns OWA inboxes into script launchpads
Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. The vulnerability does not affect Exchange Online. 
Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ®
Categories: Linux fréttir
Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day
Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager. Switchzilla dropped an advisory for CVE-2026-20182 (10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system. According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. It means they could steal data, intercept traffic, manipulate an organization's firewall rules, or just bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it. Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system. "A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric." Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for the security flaws that are both actively being exploited and threaten federal agencies. 
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks. Cisco said in its advisory there are no workarounds available, and it "strongly recommends" applying the available fixes. Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should be auditing the auth.log file at /var/log/auth.log for entries related to Accepted publickey for vmanage-admin from unknown or unauthorized IP addresses. Then, check those IP addresses against the configured System IPs that are listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said. Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, 10.0) from February. ®
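Cisco gives the exact string to hunt for, so the audit lends itself to a quick script. A minimal sketch of that filtering step, assuming standard sshd-style auth.log entries (the timestamps, PIDs, and IPs below are invented for illustration; the real log format on an SD-WAN Manager node may differ):

```python
import re

# Hypothetical sample of the /var/log/auth.log lines Cisco says to audit.
sample_log = """\
May 20 10:01:12 vmanage sshd[1201]: Accepted publickey for vmanage-admin from 10.0.0.5 port 52110 ssh2
May 20 10:02:47 vmanage sshd[1305]: Accepted password for admin from 10.0.0.9 port 52212 ssh2
May 20 10:15:33 vmanage sshd[1419]: Accepted publickey for vmanage-admin from 203.0.113.77 port 40112 ssh2
"""

# Collect source IPs of successful public-key logins as vmanage-admin,
# ready to be compared against the configured System IPs in the Manager UI.
pattern = re.compile(r"Accepted publickey for vmanage-admin from (\S+)")
source_ips = sorted({m.group(1) for m in pattern.finditer(sample_log)})
print(source_ips)
```

Any address left over once the configured System IPs are removed from that list warrants a much closer look.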
Categories: Linux fréttir
Americans Would Rather Have a Nuclear Plant In Their Backyard Than a Datacenter
A new Gallup survey found that 71% of Americans oppose having an AI data center built near them, making the facilities even less popular than nearby nuclear plants, which 53% oppose. The Register reports: When it comes to the reasons for opposing AI campuses, half of all respondents cite the effect on resources, with excess water usage and potential power grid constraints topping the list. Concern about loss of farmland and nature was surprisingly low, with just 7 percent mentioning this, but it is possible the scores are higher in rural areas. Quality-of-life concerns such as increased traffic were put forward by nearly a quarter, while a fifth mentioned higher utility bills.
Many were worried about AI specifically: that it would replace human workers, that they don't trust it, that it is moving too fast, and that the industry needs regulating. Perhaps the latter sentiment is why President Trump appears to have shifted his own position on the need for AI regulations. Conversely, those in favor of datacenters cite economic benefits, with 55 percent mentioning increased job opportunities, and 13 percent saying it is because of increased tax revenues.
[...] This being America in 2026, Gallup looked at how attitudes stack up depending on political affiliation. It found that Democrats, at 56 percent, are much more likely than Republicans to be strongly opposed to a server farm in their vicinity. But 39 percent of Republicans are also strongly opposed, while another 24 percent are somewhat averse to it, and only about a third are in favor. Gallup points out the contradiction: for AI usage to expand in the US, facilities that can handle the necessary computing power will have to be built. But most Americans appear to take a "not in my backyard" attitude to new bit barns, and that attitude has grown in strength.
Read more of this story at Slashdot.
Categories: Linux fréttir
X tells Ofcom it will finally check its moderation inbox
Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department. Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel. The company also committed to engaging with external experts on how its reporting systems work, following several organizations' complaints that they were unclear whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if the accounts are reported for posting illegal terrorist content. Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises. "Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action." The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms. 
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents and attacks on Jewish sites in Britain, including attacks in Manchester, Golders Green, and recent arson attempts in London. The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation including issues related to Grok is ongoing and that it will continue to probe X's broader illegal content compliance systems. ®
Categories: Linux fréttir
ZTE showcases at GSMA M360 LATAM 2026, driving future business model restructuring - AI & network two-way integration
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, participated in GSMA M360 LATAM 2026. Ms. Chen Zhiping, Chief International Ecosystem Representative of ZTE, delivered a keynote speech entitled "Driving Future Business Model Restructuring — AI & Network Two-Way Integration" at the conference. Ms. Chen provided an in-depth analysis of the industrial value of the two-way integration of AI and networks, sharing ZTE's achievements in the Latin American market over the past two decades, its AI-Native network innovation practices, and its full-scenario intelligent solutions, helping Latin American operators complete their strategic upgrade from "connectivity providers" to "digital economy enablers". Facing the AI industry wave, ZTE released its global strategic vision in 2025: "All in AI, AI for All, Becoming a Leader in Connectivity and Intelligent Computing". Ms. Chen stated that this strategy is highly aligned with the core concepts of this GSMA Summit. In the future, ZTE will move beyond traditional network connectivity services, continuously upgrade its basic network capabilities, and comprehensively expand its AI and intelligent computing business layout. Through a two-way integration model of AI empowering the network and the network supporting AI, ZTE will reconstruct a new business model adapted to the AI era and activate new growth momentum for the Latin American digital economy. In terms of AI-enabled network upgrades, ZTE has pioneered the AI-Native network concept, deeply embedding AI capabilities into all network layers and processes to maximize network efficiency and optimize costs. In the wireless network field, ZTE's new 5G BBU integrates native intelligent computing capabilities, effectively improving the overall efficiency of hardware and software resources and increasing cell throughput by 20%. 
Simultaneously, by combining Super-N high-performance power amplifiers and AI intelligent optimization technology, equipment energy consumption is reduced by 38%. Currently, AAU and RRU products equipped with this technology have been deployed on a large scale in several Latin American countries, including Chile, Ecuador, Bolivia, Brazil, and Peru, with over 37,000 units deployed to date, saving local operators millions of dollars in electricity costs annually and achieving efficient, green, and intelligent network upgrades. Built upon AI-Native technology, the AIR Net advanced intelligent network solution enables commercial deployment of "autonomous driving" for networks, comprehensively revolutionizing operator operation and maintenance models and reducing overall TCO. This solution has already been commercially deployed in multiple locations globally. Currently, ZTE's intelligent network capabilities have obtained authoritative L4-level certification from the TM Forum, and its self-developed Co-Claw enterprise-level intelligent agent has been fully implemented internally, continuously improving network automation and intelligence levels and helping operators move towards advanced intelligent networks. In response to the complex and diverse network environment in Latin America, ZTE continues to implement scenario-based coverage solutions to bridge the regional digital divide. In indoor scenarios, ZTE has partnered with Chilean company Millicom to deploy the Qcell solution, achieving stable gigabit coverage throughout buildings. In remote rural scenarios, ZTE collaborates with Brazilian company Claro to implement the RuralPilot simplified rural network solution, addressing network coverage challenges in the vast Amazon region with its low cost and ease of maintenance. ZTE also offers a wide range of home coverage solutions, precisely matching the networking needs of different regions and scenarios in Latin America.
Ms. Chen Zhiping stated that ZTE will continue to be rooted in the Latin American market, deepen the two-way integration and innovation of AI and networks, and continue to implement green, efficient, and intelligent full-stack ICT solutions to help local operators complete their strategic transformation, upgrade from traditional connectivity service providers to digital economy enablers, comprehensively meet the intelligent needs of industries and families in all scenarios, and work together to build a smart, inclusive, and sustainable new digital ecosystem in Latin America. Contributed by ZTE.
Categories: Linux fréttir
OpenAI caught in TanStack npm supply chain chaos after employee devices compromised
OpenAI says attackers behind the TanStack npm supply chain compromise stole internal credentials after reaching two employee devices, forcing the company to rotate signing certificates for several desktop products. The company disclosed this week that it had been caught up in the wider "Mini Shai-Hulud" campaign targeting npm ecosystems and developer infrastructure, though it said there was no evidence that customer data, production systems, or deployed software were compromised. OpenAI said the incident happened during a phased rollout of new supply chain security controls introduced after a previous Axios-related incident. According to the company, the two compromised employee devices had not yet received updated package management protections that would have blocked the malicious dependency. The attackers carried out "credential-focused exfiltration activity" against a limited set of internal repositories reachable from the affected employee machines, according to OpenAI. It said "only limited credential material was successfully exfiltrated from these code repositories." That was apparently enough to trigger a precautionary reset across multiple products. OpenAI is rotating the certificates used to sign macOS versions of ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and is requiring users to update the affected software by June 12. The incident ties OpenAI to the increasingly messy supply chain campaign that has spent the past several weeks worming through npm ecosystems, CI/CD infrastructure, and GitHub Actions workflows. Security firm Socket linked the TanStack compromise to the broader "Mini Shai-Hulud" operation, which abused poisoned automation workflows and stolen publishing credentials to push malicious package updates into trusted software pipelines. 
Researchers tracking the wider Mini Shai-Hulud campaign have connected the activity to a threat group known as TeamPCP, which appears to have developed an unhealthy interest in poisoning npm ecosystems and rifling through developer credentials. TanStack confirmed this week that 84 malicious package versions spanning 42 @tanstack/* packages had been published after attackers compromised parts of its release infrastructure. The poisoned packages were designed largely to steal credentials, including GitHub tokens, cloud secrets, npm credentials, and CI/CD authentication material. The campaign appears linked to earlier Mini Shai-Hulud attacks involving SAP-related npm packages, suggesting the same credential-stealing operation is spreading across multiple developer ecosystems. OpenAI said it is continuing to investigate the incident and monitor for any downstream abuse tied to the stolen credentials. The reassuring news is that OpenAI says no production systems were breached. The less reassuring news is that attackers keep getting deeper into the software assembly line before anybody notices. ®
Categories: Linux fréttir
Fusion for the future: XLSMART and ZTE partnering for a boundless digital Indonesia
Partner Content In Indonesia, the magic of "Bumbu", that perfect spice blend, creates unforgettable flavors. Today, in the digital world, an even grander "fusion" is taking place. Facing the challenge of unifying separate networks across Indonesia's diverse geography, XLSMART partnered with ZTE on a landmark dual-network convergence project, integrating over 20,000 4G base stations and deploying more than 7,000 new 5G sites in just eight months. The initiative has launched the country's first nationwide 5G blanket coverage network, validated by Ookla as the fastest 5G network in H2 2025. Leveraging digital-intelligent tools and ecosystem collaboration, the project significantly enhanced coverage, capacity, and user experience for 73 million subscribers — turning complex delivery challenges into measurable gains in speed and efficiency. Fusion for the Future. Watch how the converged network is powering Indonesia's digital growth. Contributed by ZTE.
Categories: Linux fréttir
UK reloads artillery plans with £1B remote-control howitzer order
The British Army is to get 72 next-gen mobile artillery units, in the shape of a remote-controlled howitzer (RCH) module that mounts onto the Boxer armored vehicle already in service. The Ministry of Defence (MoD) announced a £1 billion ($1.35 billion) contract to provide the Army with a modern mobile system capable of providing artillery support against targets up to 70 km (44 miles) away. First deliveries of the RCH 155 units are expected in 2028, with a "minimum deployable capability" expected before the end of the decade. It follows a £52 million early capability demonstrator contract signed in December 2025. The RCH 155 is basically a 155 mm gun housed in a turreted artillery module mounted on the Boxer drive module. It is an auto-loading weapon, capable of firing eight rounds per minute. The unit features a fire control computer with integrated ballistics calculation, plus radio data transmission to a remote artillery control system. Boxer is an eight-wheeled (8x8), all-terrain vehicle designed to take a number of different bolt-on mission modules allowing it to fulfill various roles. The British Army has initially chosen just a few of these types, primarily the troop carrier variant, but also the ambulance module and command vehicle unit. According to the MoD, the barrel, breech, recoil system, and trunnions will be manufactured by German defense biz Rheinmetall at its large-caliber production facility in Telford, using British steel supplied by Sheffield Forgemasters. The Boxer drive modules/chassis, engine, and drivetrain that the weapon system sits on will be manufactured by the UK division of pan-European defense firm KNDS in Stockport. The Army is to receive a total of 623 of these. A new mobile artillery platform was needed to replace the UK's aging fleet of AS-90 self-propelled howitzers. These could easily be mistaken for a tank, thanks to their tracked chassis and turret-mounted gun. 
The last of these were donated to Ukraine over the past few years to help it fight Russia. The UK also procured a small number (14) of Archer mobile artillery systems as a stop-gap while a successor for AS-90 was selected. This is an automated 155 mm gun mounted on a 6x6 articulated truck chassis. "This major investment is defence delivering for the battlefield and for Britain's economy," said Defence Secretary John Healey MP. "By securing next-generation artillery with Germany, not only are we rearming to strengthen NATO against growing Russian aggression but also creating highly skilled jobs here in Britain." Ironically, Britain was one of the earliest partners in the Boxer joint venture, but withdrew from it in 2003 to focus on a different program, the Future Rapid Effect System (FRES). One strand of FRES eventually led to what is now known as the Ajax family of armored vehicles. You may have heard of it. The UK government announced it was rejoining the Boxer program in 2018 in order to meet its Mechanized Infantry Vehicle (MIV) requirement. ®
Categories: Linux fréttir
Britain's latest civil servant is a chatbot trained on GOV.UK misery
After years of turning public services into a maze of dead links, phone queues, and eligibility calculators, the UK government has unveiled the inevitable next step: an AI chatbot. The UK government on Friday announced the launch of "GOV.UK Chat," a generative AI assistant bolted into the GOV.UK app and trained on tens of thousands of pages of official guidance that Whitehall is boldly pitching as the "most comprehensive government-built chat tool in the world." Ministers say the system will help people navigate everything from maternity pay and retirement benefits to driving licenses and startup grants without having to dig through the bureaucratic swamp that is modern Britain. According to the government, some public sector call centers handle around 100,000 calls a day, which helps explain why ministers are suddenly very enthusiastic about citizens talking to software instead. Technology Secretary Liz Kendall said people fed up with being stuck on hold should not have to spend hours wading through online guidance either, which sounds suspiciously like somebody inside government has finally used GOV.UK. "For too long, navigating government has felt like a full-time job," she said. "Whether you're a parent trying to find out what childcare you're entitled to, a first-time buyer working out which schemes you can access, or someone approaching retirement, you shouldn't have to spend time trawling through hundreds of web pages to get a straight answer." The rollout comes just months after polling showed plenty of Brits are already uneasy about AI spreading through public services. Concerns ranged from privacy and job losses to fears that dealing with the government will eventually mean getting stuck in an automated support maze when something important goes wrong. The government said human support will still be available alongside the chatbot, at least for the time being. Ministers are keen to stress that GOV.UK Chat is not deciding who gets benefits or owes tax. 
Right now, the system mostly pulls together existing guidance, calculators, and links from across GOV.UK rather than making decisions itself. Given Whitehall's uneven history with large technology projects, that's probably a wise decision. Still, it is not hard to see where this is heading. Today, the chatbot helps you find childcare support. A few years from now, it will probably be explaining why an algorithm flagged your wheelie bin for suspicious behavior. ®
Categories: Linux fréttir
MPs want social media treated more like unsafe toys than harmless apps
British MPs are urging the government to tighten online safety laws, arguing social media companies should face the same kind of scrutiny as other products linked to serious harm. In a letter to Liz Kendall and Kanishka Narayan, shared with The Register, the UK's Science, Innovation and Technology Committee said there is now "strong and consistent evidence" linking social media use to harms affecting young people and warned that "no action is not an option." The committee, chaired by Chi Onwurah, said the current system leaves social media companies free to grow their youth user bases while avoiding meaningful responsibility for the subsequent fallout. "The status quo, where social media companies are neither accountable nor responsible for preventing harms, isn't acceptable," Onwurah said. "If any other consumer product caused these harms, it would've been recalled or changed." The intervention forms part of the government's "Growing up in the online world" consultation and follows a March evidence session examining arguments for and against restricting social media access for under-16s. The committee said it heard evidence from clinicians, bereaved parents, academics, child safety groups, and experts studying Australia's social media age limits, as well as accounts from young people and families concerned about harmful content and the effect social media is having on children's wellbeing. While the MPs stopped short of explicitly endorsing a blanket social media ban for teenagers, the letter makes clear the committee thinks ministers have spent too long relying on voluntary action from platforms whose business models still reward engagement above pretty much everything else. 
The committee said existing age restrictions should be properly enforced using "effective and privacy-preserving" age verification systems – rather than checks that can be bypassed by a drawn-on mustache – and called for stronger legal obligations requiring companies to filter illegal content and to block children from viewing harmful material. The letter also revisits the committee's earlier concerns about recommendation algorithms and how platforms deal with harmful and illegal posts, areas where MPs say previous proposals for reform went nowhere. MPs are now urging ministers to revisit those recommendations and bring forward fresh online safety legislation in the next parliamentary session. Particular attention was paid to algorithms and addictive design features. The committee argued that infinite scrolling and similar engagement mechanics should be designed out of platforms entirely, and warned that social media companies cannot keep pretending they are passive hosts while their recommendation systems actively shape what users see. The letter also warned that gaps in the UK's Online Safety Act mean some AI chatbots operating on closed databases currently fall outside the regime, something MPs said must be fixed before the next generation of online platforms disappears into yet another regulatory blind spot. ®
Categories: Linux fréttir
SpaceX Unveils Sweeping Starship V3 Upgrades
SpaceX has detailed major Starship V3 upgrades ahead of a launch targeted as early as May 19. The changes are meant to move Starship closer to its core goals: rapid reuse, Starlink deployment, orbital refueling, and eventually Moon and Mars missions. Longtime Slashdot reader schwit1 shares a report from Teslarati: Here is an explicit, broken-down list of the key changes, first starting with the changes to Super Heavy V3:
- Grid Fin Redesign: Reduced from four fins to three. Each fin is now 50% larger and stronger, repositioned for better catching and lifting performance. Fins are lowered on the booster to reduce heat exposure during hot staging, with hardware moved inside the fuel tank for protection.
- Integrated Hot Staging: Eliminates the old disposable interstage shield. The booster dome is now directly exposed to upper-stage engine ignition, protected by tank pressure and steel shielding. Interstage actuators retract after separation.
- New Fuel Transfer System: Massive redesign of the fuel transfer tube -- roughly the size of a Falcon 9 first stage -- enables simultaneous startup of all 33 Raptors for faster, more reliable flip maneuvers.
- Engine Bay/Thermal Protection: Engine shrouds removed entirely; new shielding added between engines. Propulsion and avionics are more tightly integrated. CO2 fire suppression system deleted for a simpler, lighter aft section.
- Propellant Loading Improvements: Switched from one quick disconnect to two separate systems for added redundancy and reduced pad complexity.
Next, we have the changes to Starship V3:
- Completely Redesigned Propulsion System: Clean-sheet redesign supports new Raptor startup, larger propellant volume, and an improved reaction control system while reducing trapped or leaked propellant risk.
- Aft Section Simplification: Fluid and electrical systems rerouted; engine shrouds and large aft cavity deleted.
- Flap Actuation Upgrade: Changed from two actuators per flap to one actuator with three motors for better redundancy, mass efficiency, and lower cost.
- Faster Starlink Deployment: Upgraded PEZ dispenser enables quicker satellite release.
- Long-Duration Spaceflight Capability: New systems for long orbital coasts, orbital refueling, cryogenic fluid management, vacuum-insulated header tanks, and high-voltage cryogenic recirculation.
- Ship-to-Ship Docking + Refueling: Four docking drogues and dedicated propellant transfer connections added to support in-space refueling architecture.
- Avionics Upgrades: 60 custom avionics units with integrated batteries, inverters, and high-voltage systems (9 MW peak power). New multi-sensor navigation for precision autonomous flight. RF sensors measure propellant in microgravity. ~50 onboard camera views and 480 Mbps Starlink connectivity for low-latency communications. "Believe it or not, there's more," writes schwit1. "Two years ago, the biggest and most powerful rocket ever flown was Starship V1. Last year, it was Starship V2. V3 is about to become the biggest and most powerful rocket ever flown -- but don't worry, the company already has plans for V4."
Read more of this story at Slashdot.
Categories: Linux fréttir
On-call techie decided job was done and hit the bottle – just before his pager went off
ON CALL Welcome to another installment of On Call, The Register's weekly reader-contributed column that celebrates the IT professionals who put their lives on pause to provide tech support at all hours. This week, meet a reader we'll Regomize as "Jemaine." In the early 1990s, he found himself in Hong Kong working as a database specialist on VAX/VMS systems. "We'd built a billing application for a telco client in Macau, and it had been running happily for some time," he told On Call. By the time the system needed its first major OS upgrade, Jemaine was therefore happy for the local crew to handle the job. His client had other ideas and, despite also arranging for two DBAs to be present during the upgrade, insisted he show up. This was not a hardship because the job coincided with the Macau Grand Prix and Jemaine wasn't required to be on site. The client had therefore provided him with a hotel room that, as luck would have it, had a view of the track! "A couple of friends ended up crashing my room, and we spent the weekend watching insane drivers hurl cars around an absurdly tight street circuit," Jemaine admitted. The client never called or paged, so after the race Jemaine was confident the upgrade was going well. He and his friends therefore consumed "several bottles of rich Portuguese red wine" and ordered a sumptuous meal. "Dessert had just arrived when my pager went off," he told On Call. Jemaine poured himself into a cab to his client's office and found a situation he described as "vague but clearly serious" because the billing application wouldn't start. "Judging by the silence and the stoic expressions, everyone was quietly panicking," Jemaine wrote. He soon learned that the client had already tried to fix the app by reinstalling the OS twice and had now decided the database was the source of the problem. Jemaine was told to wait while the DBAs reinstalled the database, which "gave me time to sit in a back room and sober up slightly," he admitted to On Call. 
The database rebuild finished at about 2 am, but the application still refused to start. The client then turned to Jemaine. "I was summoned and interrogated by the systems team," he said, and ran a quick check that showed the database was perfectly healthy – but the batch scheduler wasn't running. To probe that problem, Jemaine asked to speak with the lead developer – who, it turned out, was not on site. "An urgent page was sent, and fortunately he called back quickly. His suggestion was to step through the code. This meant compiling a large COBOL program I'd never seen before in DEBUG mode, then single-stepping through it over the phone with the developer." By now, an increasingly anxious semicircle of client staff was watching Jemaine's every move, and he felt like they were silently shifting blame in his direction. "At around 4 am, we found the failure point: batch queue submission. The call was returning a null error code. The developer was baffled." "I reached for the physical manual to see what the function actually did," Jemaine wrote. "And then, for reasons I still credit to the Portuguese wine gods, I asked a simple question: 'What account did you test this under?'" The developer immediately replied: "Administrator." Jemaine asked the OS upgrade team to run the application with administrator privileges, and it immediately worked. "The OS upgrade had introduced a new permission requirement for submitting jobs to the batch queue," Jemaine told On Call. So this was very much not his problem, and he was able to excuse himself and stagger home as the Sun started to rise. "Nobody from the company ever mentioned the incident to me again," he told On Call. "And I can't remember the name of the wine we were drinking." Have you been on call, decided nothing could possibly go wrong, and then been caught out? If so, click here to send On Call an email so we can tell your story on a future Friday. ®
Categories: Linux fréttir
AWS racks M3 Ultra Macs that boast specs you can’t currently buy
Amazon Web Services has done something many others can’t achieve: Buy a bunch of Apple’s Mac Studio computers. Mac Studio is Apple’s workstation-grade machine and has been hard to find in recent weeks as Cupertino struggles to find enough RAM to fill them, and AI enthusiasts snap up stock to run tools like OpenClaw. At the time of writing, Apple advises buyers they’ll need to wait nine or ten weeks for a Mac Studio to arrive. The cloudy Macs AWS has racked and stacked pack Apple’s M3 Ultra SoC, Cupertino’s most powerful chip. Apple currently sells the Mac Studio with up to 96GB of RAM. AWS on Thursday started offering a cloudy M3 Ultra with 256GB of unified memory, a configuration The Register did not see as an option on Apple.com while preparing this article. The cloudy M3 Ultra machines run on actual Mac Studios packing a 28-core CPU, 60-core GPU, and 32-core Neural Engine. At the time of writing, AWS hadn’t updated its list of EC2 instance types to include the new M3 instances, so we can’t tell you what they’ll cost or if the cloud giant has departed from its past practice of renting bare metal machines rather than macOS VMs. Apple allows users to create and run macOS virtual machines, but only on Apple hardware and allows just two VMs per host. Cupertino also restricts use of VMs to four purposes: software development; testing during software development; using macOS Server; and personal, non-commercial use. AWS recommends its cloudy Macs as an ideal platform to build and test apps for all of Apple’s operating systems – even the visionOS that powers its unloved Vision Pro VR goggles. Amazon’s M3 Ultra Mac Studios only made it into two regions – US East and US West (Oregon) – so users elsewhere who fancy a cloudy Mac but need lower latency will have to endure the very on-prem experience of waiting for hardware to show up. ®
Categories: Linux fréttir
Musk Accused of 'Selective Amnesia', Altman of Lying As OpenAI Trial Nears End
An anonymous reader quotes a report from Reuters: A lawyer for Elon Musk hammered at the credibility of OpenAI CEO Sam Altman on Thursday, near the end of a trial over whether to hold the ChatGPT maker and its leaders responsible for allegedly transforming the nonprofit into a vehicle to enrich themselves. OpenAI's lawyers fought back, claiming the world's richest person waited too long to claim OpenAI breached its founding agreement to build safe artificial intelligence to benefit humanity, and couldn't claim he was essential to its success. "Mr. Musk may have the Midas touch in some areas, but not in AI," said William Savitt, a lawyer for OpenAI. "To succeed in AI, as it turns out, all Mr. Musk can do is come to court."
The claims were made during closing arguments of a trial in the Oakland, California, federal court. [...] In his closing argument, Musk's lawyer Steven Molo told jurors that five witnesses, including Musk, former OpenAI board members and former OpenAI Chief Scientist Ilya Sutskever, testified that Altman was a liar. Molo also noted that during cross-examination on Tuesday, Altman did not say yes unequivocally when asked if he was completely trustworthy and did not mislead people in business. "Sam Altman's credibility is directly at issue in this case," Molo said. "If you don't believe him, they cannot win."
Molo accused OpenAI of wrongfully trying to enrich investors and insiders at the nonprofit's expense, and failing to prioritize AI's safety. He also challenged Brockman's goals for the business, citing Brockman's statement that his own OpenAI stake was worth nearly $30 billion. "The arrogance, the lack of sensitivity, the failure to account for just common decency is really, really abhorrent." Musk also accused Microsoft, which invested $1 billion in OpenAI in 2019 and $10 billion in 2023, of aiding and abetting OpenAI's wrongful conduct. "Microsoft was aware of what OpenAI was doing every step of the way," Molo said.
Sarah Eddy, another lawyer for the OpenAI defendants, accused Musk and his legal team in her closing argument of resorting to "sound bites and irrelevant false accusations."
Eddy said by 2017, everyone associated with OpenAI -- including Musk, then still on its board -- knew it needed more money to fulfill its mission than it could raise as a nonprofit. "Mr. Musk wanted to turn OpenAI into a for-profit company that he could control," she said. "But the other founders refused to turn the keys of AGI (artificial general intelligence) over to one person, let alone Elon Musk." She also said if Musk truly believed AI should serve humanity, he would not have pushed to fold OpenAI into his electric car company Tesla, or made his rival xAI a for-profit company.
Musk had a three-year statute of limitations to sue, and OpenAI's lawyers said his August 2024 lawsuit came too late because he knew several years earlier about OpenAI's growth plans.
Eddy expressed disbelief that Musk claimed he did not read a four-page term sheet in 2018 discussing OpenAI's plan to seek outside investments. "One of the most sophisticated businessmen in the history of the world" wouldn't have "stuck his head in the sand," Eddy said. Savitt accused Musk of having "selective amnesia." Microsoft's lawyer Russell Cohen said in his closing statement that Microsoft wasn't involved in the key events of the case, and was "a responsible partner at every step." On Monday, the nine-person jury is expected to begin deliberating. The judge and lawyers will also return to court to discuss possible remedies if Musk wins, including how OpenAI should be restructured and what damages might be awarded. If Musk loses, there will be no remedies to consider.
Recap:
OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk (Day Eleven)
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Read more of this story at Slashdot.
Categories: Linux fréttir
Possible Samsung strike puts even more pressure on memory pricing
RAM prices have risen after negotiations between Samsung and a union representing many of its workers collapsed – and the union has now called for a lengthy strike to start next week. The National Samsung Electronics Union (NSEU) has noticed the extraordinary profits the Korean giant is making thanks to the high price of RAM, and wants the company to boost members’ pay with bonuses tied to profits. Talks on that idea have stalled, and pointing out that Samsung pays its memory workers less than their peers at SK Hynix earn hasn’t found a receptive ear. The union therefore plans to start an 18-day strike next week. If the industrial action goes ahead, it has the potential to disrupt memory production, which would mean further shortages at a time when DRAM is already expensive and hard to acquire due to rampant demand for AI infrastructure. In the short term, memory prices have therefore spiked in the last 72 hours – which, ironically, will just increase Samsung’s profits even more! The union has accused Samsung of not taking its arguments seriously, and South Korea’s government has stepped in with attempts to bring the two parties to the table for fresh talks that lawmakers hope will resolve the situation because The Spice Must Flow. Or maybe The RAM Must Roll. Samsung recently posted almost $40 billion profit for a single quarter, thanks largely to memory sales. That enormous sum, and others like it reported by Korean companies that sell memory and other products in demand from AI builders, caught the attention of Yong-Beom Kim, South Korea’s Chief Presidential Secretary for Policy – a ministerial role. Using his personal Facebook page, Kim suggested funneling a portion of AI profits into a “national dividend fund” that could be used to improve South Korea’s long-term prospects. His post mentions Norway’s sovereign wealth fund, which famously siphoned off revenue from oil sales and invested it in shares to create assets worth over $2 trillion.
Vendors often tell The Register “data is the new oil” so maybe Kim is on to something – although the metaphor may not work well when one considers current events in the Strait of Hormuz and their effect on the world. ®
Categories: Linux fréttir
Cerebras risked it all on dinner plate-sized AI accelerators a decade ago. Today it’s worth $66 billion
Cerebras Systems has done what many chip startups aspire to but few ever achieve. On Thursday, the long-time Nvidia rival raised $5.55 billion in an initial public offering (IPO), making the company worth more than $66 billion on its first day of trading. The milestone didn’t happen overnight. It took more than a decade, a radically different approach to chipmaking, and two separate attempts at an IPO to pull off. Cerebras Systems was founded in 2015 by former SeaMicro head Andrew Feldman, and its first chips looked nothing like the GPUs or AI accelerators of the time.
The bet that put Cerebras on the map
At the time, most high-end GPUs used dies measuring roughly 800 square mm that’d been cut from a larger wafer. Eight or more of these GPUs would typically be stitched together by high-speed interconnects, like NVLink, which allowed them to pool their resources and behave like one big accelerator. Rather than cutting up a wafer into smaller chips just to reconnect them again, Cerebras figured: why not etch all that compute into a wafer-sized chip? And so the Wafer-Scale Engine (WSE), a giant chip measuring 46,225 square mm — about the size of a dinner plate — was born. Cerebras' first chips weren’t just bigger; they were purpose-built for AI training and sported a novel compute engine designed to speed up the highly sparse matrix multiply-accumulate operations common in deep learning. This hardware sparsity took advantage of the fact that large portions of a neural network’s parameters ultimately end up being zeros, allowing Cerebras to boost the effective computational output of its first-gen WSE accelerators from 2.65 16-bit petaFLOPS to 26.5. Nvidia added support for sparsity in its Ampere generation a year later, but it only worked for a specific ratio (2:4), limiting its effectiveness to select use cases. To train a model, up to 16 of these chips could be ganged together over a high-speed interconnect.
This was kind of important too, because unlike GPUs, which stored model weights in HBM or GDDR memory, Cerebras' chips were almost entirely reliant on on-chip SRAM. Although SRAM is insanely fast, which is why it’s used for caches in basically every modern processor, it’s not particularly space efficient. While Cerebras' first wafer-scale accelerator could theoretically reach 9 petabytes per second of memory bandwidth, it was limited to just 18 GB of capacity at a time when Nvidia was already at 32 GB per GPU and about to make the leap to 40 GB or even 80 GB per chip. Still, the approach was performant enough that for its second-generation wafer-scale accelerator, launched in 2021, Cerebras doubled down on the architecture. While the WSE-2 wasn’t physically larger, the move to TSMC’s 7nm process tech allowed the company to more than double the transistor count, compute density, SRAM capacity, and bandwidth. The chips also supported larger clusters, scaling up to 192 systems, though in practice these clusters were usually smaller, at between 16 and 32 systems per site. It was also around this time that Cerebras caught the attention of United Arab Emirates-based cloud provider G42, which quickly became its largest financier. By mid-2023, the chip startup had secured orders worth $900 million for nine supercomputing sites with 36 exaFLOPS of super-sparse AI compute between them. A year later, Cerebras made the jump to TSMC’s 5nm process with the WSE-3, and while memory capacity and bandwidth only saw modest gains, compute once again doubled, now topping 125 petaFLOPS of sparse (12.5 petaFLOPS dense) compute at 16-bit precision. Cerebras’ CS-3 systems have seen the largest deployment yet, and now power the majority of the Condor Galaxy cluster it built for G42, as well as several new sites across North America and Europe.
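The two sparsity approaches differ mainly in flexibility: Cerebras' hardware can skip arbitrary zeros, while Nvidia's Ampere-era scheme only accepts a fixed 2:4 pattern (two nonzero values in every group of four). Here's a minimal NumPy sketch of that fixed-ratio pruning, purely an illustration of the general idea rather than either vendor's actual implementation:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """2:4 structured sparsity: in every group of 4 consecutive weights,
    keep the 2 with the largest magnitude and zero out the other 2.
    Assumes the weight count is divisible by 4 (true for this sketch)."""
    flat = weights.reshape(-1, 4)
    # indices of the two smallest-magnitude weights in each group of four
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, 0.7, 0.2, -0.8, 0.6, 0.01])
print(prune_2_4(w))  # → [ 0.9  0.   0.   0.7  0.  -0.8  0.6  0. ]
```

Because exactly half the weights in every group of four are forced to zero, hardware that recognizes the pattern can skip them and roughly double effective throughput, but only when a model's zeros happen to fit that rigid layout, which is why its usefulness was limited to select use cases.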
Cerebras' inference inflection
Up to mid-2024, Cerebras' primary focus had been on training, but then the company announced a boutique inference-as-a-service offering to rival those from competing chip startups like Groq and SambaNova. It turns out Cerebras’ latest AI accelerators’ massive SRAM capacity not only made them potent training accelerators but also left them particularly well suited to high-speed LLM inference. In its third iteration, Cerebras' wafer-scale accelerators boasted more memory bandwidth than they could realistically use. At 21 PB/s, the chip’s memory is nearly 1,000x faster than Nvidia’s new Rubin GPUs. This, along with a dash of speculative decoding, allowed Cerebras to generate tokens far faster than any GPU-based system of the time. Even today, Cerebras routinely ranks among the fastest inference providers in the world. According to Artificial Analysis, Cerebras' kit can churn out more than 2,200 tokens a second when running GPT-OSS 120B High, 2.8x faster than the next-closest GPU cloud, Fireworks. Cerebras didn’t know it at the time, but its inference platform would become a much bigger business than anyone had expected, and in September 2024, the company submitted its S-1 filing to the SEC to go public. Almost exactly a year later, Feldman quietly pulled the S-1, delaying the IPO. His reasons? The company’s initial S-1 filing was rather concerning, as it showed G42 was responsible for 87 percent of its revenues. But in the year since launching its inference platform, Cerebras had racked up several high-profile customer wins from big names like Alphasense, AWS, Cognition, Meta, Mistral AI, Notion, and Perplexity. Feldman explained that the initial S-1 didn’t yet show the financial results of this growth. The company believed it would have a better story to tell investors later down the road. Cerebras' inference platform has only grown since then.
The company has steadily expanded its footprint while announcing deeper relationships with AWS and adding OpenAI as a customer. On Thursday, the startup officially joined the NASDAQ under the ticker CBRS, having raised $5.55 billion in the process. Shares skyrocketed nearly 70 percent on the first day of trading, as investors poured their money into a new way to play the AI boom. An IPO is something many startups aspire to but few, especially in the cutthroat world of semiconductors, ever accomplish.
What happens now
From a technical perspective, Cerebras is overdue for a refresh. The WSE-3 accelerators that pushed it over the IPO finish line are getting rather long in the tooth, and the architecture lead afforded by its SRAM-heavy design is shrinking. Nvidia’s acquihire of Groq gave Feldman’s long-time rival an SRAM-packed inference platform of its own, while others are racing to catch up. From here, we can only speculate, but we’ll hazard a guess that Cerebras' new shareholders are going to want to see new silicon sooner rather than later. Based on its existing roadmap, we expect WSE-4 will offer a sizable leap in floating point performance, though not necessarily at 16-bit precision. Much of the industry has aligned around lower-precision data types like FP8 and FP4. An exaFLOP of ultra-sparse FP4 compute wouldn’t shock us in the least. How useful sparsity would actually be for LLM inference is another matter. LLM inference hasn’t historically benefited much from sparsity, but that’s never stopped chipmakers from advertising sparse FLOPS anyway. We also expect to see Cerebras pack more SRAM into its next wafer-scale compute platform, possibly using TSMC’s 3D chip-stacking tech to do it. The WSE-3’s 44GB of SRAM capacity remains a limiting factor for what models it can and can’t serve efficiently.
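That capacity ceiling is easy to put rough numbers on. The back-of-envelope sketch below assumes, as a deliberate simplification, that a model's weights must fit entirely in on-chip SRAM; it ignores KV cache, activations, and any replication, so treat the results as a lower bound:

```python
import math

def chips_needed(n_params: float, bytes_per_param: float,
                 sram_per_chip_gb: float = 44.0) -> int:
    """Minimum number of SRAM-only accelerators needed just to hold a
    model's weights. The 44 GB default matches the WSE-3 figure cited
    in the article; everything else here is a rough assumption."""
    weight_bytes = n_params * bytes_per_param
    return math.ceil(weight_bytes / (sram_per_chip_gb * 1e9))

# A 1-trillion-parameter model at different quantization levels:
for label, b in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    print(label, chips_needed(1e12, b))  # → FP16 46, FP8 23, FP4 12
```

The spread from roughly a dozen chips at FP4 to nearly fifty at FP16 shows why quantization and pruning matter so much for wafer-scale inference, and why more SRAM per wafer would pay off directly.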
A trillion-parameter model like Kimi K2 would require somewhere between 12 and 48 of Cerebras' WSE-3 accelerators, depending on how the model weights are stored and how many parameters have been pruned, so any increase in SRAM capacity would go a long way toward improving the efficiency of its accelerators.
More collaborations
Alongside new silicon, we can also expect to see more collaborations akin to Cerebras' tie-up with AWS. Earlier this year, AWS announced it would combine its Trainium3 AI accelerators with Cerebras' WSE-3-based systems to speed up its inference platform in much the same way Nvidia is doing with Groq’s accelerators. Cerebras could certainly do something similar with AMD or any other chipmaker. In this sense, Cerebras is in a position to offer its chips as decode accelerators, offloading the bandwidth-intensive parts of the inference pipeline onto its silicon while other chips handle the compute-heavy prompt-processing side of the equation. However Cerebras frames its next collab, its shareholders are going to expect growth. And as the saying goes, the enemy of my enemy is my friend. ®
Categories: Linux fréttir
UK Antitrust Regulator Is Officially Investigating Microsoft Office
The UK's Competition and Markets Authority is opening a formal investigation into whether Microsoft's bundling of Windows, Office, Teams, Copilot, and related products harms competition. Engadget reports: "Our aim is to understand how these markets are developing, Microsoft's position within them and to consider what, if any, targeted action may be needed to ensure UK organizations can benefit from choice, innovation and competitive prices," CMA Chief Executive Sarah Cardell said in a statement published by Reuters.
She also stressed the importance of the investigation by noting that hundreds of thousands of UK residents use business software and Microsoft products. The organization will take a look into the company's cloud licensing practices. The CMA has stated that the inquiry will conclude by February. At that point, Microsoft could get slapped with a strategic market label.
Microsoft says it's "committed to working quickly and constructively with the CMA to facilitate its review of the business software market." A strategic market designation doesn't automatically assume wrongdoing, but will give the CMA more leeway when conducting further interventions.
Read more of this story at Slashdot.
Categories: Linux fréttir
