SOLAI Launches $399 Solode Neo Linux AI Computer

Slashdot - Wed, 2026-05-13 22:00
BrianFagioli writes: SOLAI has launched the Solode Neo, a $399 Linux-based mini PC designed for always-on AI agents, browser automation, and persistent developer workflows. The compact system ships with an Intel N150 processor, 12GB LPDDR5 memory, 128GB SSD storage, Gigabit Ethernet, WiFi, Bluetooth, and a Linux-based operating system called Solode AI OS. The company says the device supports frameworks and tools including Claude Code, OpenAI Codex, Gemini CLI, and Hermes, while emphasizing local control, automation, and privacy-focused workflows running directly from a home network. While SOLAI markets the Solode Neo as an "AI computer," the hardware itself appears aimed more at lightweight automation and cloud-assisted agent tasks than heavy local inference. The low-power Intel N150 should be sufficient for browser automation, scheduling, monitoring, containers, and smaller AI workloads, but the system is unlikely to compete with higher-end local AI hardware designed for running larger models offline. Even so, the idea of a dedicated low-power Linux appliance for persistent AI and automation tasks may appeal to homelab users and self-hosting enthusiasts looking for a simpler alternative to building their own always-on workflow box from scratch.
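
To make the "lightweight automation" claim concrete, the class of always-on job such a box targets can be as small as a polling loop. The following is a minimal, illustrative Python sketch – the endpoint and interval are placeholder assumptions, not anything SOLAI ships:

    # Minimal always-on monitor of the sort a low-power box can run 24/7.
    # The URL and interval are illustrative placeholders.
    import time
    import urllib.request

    TARGET = "https://example.com/health"  # hypothetical endpoint
    INTERVAL_SECONDS = 300                 # poll every five minutes

    while True:
        try:
            with urllib.request.urlopen(TARGET, timeout=10) as resp:
                status = resp.status
        except Exception as exc:  # DNS failures, timeouts, HTTP errors
            status = f"error: {exc}"
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {TARGET} -> {status}")
        time.sleep(INTERVAL_SECONDS)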

Read more of this story at Slashdot.

Software Developers Say AI Is Rotting Their Brains

Slashdot - Wed, 2026-05-13 21:00
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time-consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to. "We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," a software developer at a small web design firm told 404 Media. "It's making me dumber for sure," a fintech software developer added. "It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before." A software engineer at a FAANG company said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."

Read more of this story at Slashdot.

Datacenters are having fewer, but bigger failures

TheRegister - Wed, 2026-05-13 20:48
There's good news and bad news when it comes to datacenter uptime. According to a recent report from the Uptime Institute, bit barns have actually gotten more resilient over the past five years. However, the report suggests that those datacenter failures that do occur are lasting longer and costing more to resolve. According to Uptime, half of the operators surveyed reported an impactful or serious outage in the past three years. “This is the lowest level recorded since 2020 and continues a multi-year trend of improving reliability,” the report states. However, the report also finds that datacenter operators may be having a harder time adding additional 9s of reliability to their SLAs. According to Uptime, failure rates are falling at a slower pace, suggesting that existing efforts to improve resiliency may be at the point of diminishing returns. This doesn’t appear to be the result of complacency. Instead, analysts suggest that efforts to improve uptime are being offset by greater system complexity and more challenging operating environments caused by the widespread deployment of power-dense infrastructure used in AI training and inference. “Higher rack densities, load variability, and operating closer to available power limits may increase the likelihood of cascading failures,” Uptime warns. Shortages of critical physical infrastructure like generators, switchgear, transformers, and other power and cooling systems have driven some operators to adopt second-hand or unproven hardware. “This is believed to have contributed to several failures and incidents at some datacenters,” the report reads. Power-related failures remain the leading cause of major datacenter disruptions, but even this is improving. “While power issues accounted for 45 percent of respondents’ most impactful outages in 2025, this is down from 54 percent in 2024,” the analysts write. However, the analysts also warn that this could change as local grids are stressed by ever larger datacenter deployments. While Uptime doesn’t expect grid power failure to be a primary cause of outages going forward, grid failures can still affect the availability of onsite power. During an outage, datacenters have a limited window to switch over to onsite generators, which can and do fail. Overburdened grids aren’t the only external factors on Uptime’s radar. The industry watchers note that many public outages have been linked to fiber cuts and other networking disruptions. “Digital infrastructure is becoming more distributed with outages originating outside the datacenter, including those tied to power availability, network connectivity or the reliance on external cloud services playing a larger role,” Uptime Analyst Andy Lawrence said in a statement. According to the report, networking-related issues remain the most frequently cited cause for IT disruptions. Even if the datacenter itself doesn’t fail, a bad network configuration can still result in service outages. The good news is that wide adoption of software-defined networking and automated traffic rerouting has helped mitigate this risk. The report found that 20 percent of those surveyed reported having no IT service outages in the past three years, an improvement of nine points from 2024. Software-level resiliency is helping to mitigate localized disruptions, like a fiber cut, by distributing the workload across multiple sites. However, this software resiliency comes with its own challenges, most notably complexity.
As we saw with the drone strikes on Amazon’s UAE and Bahrain datacenters, spreading your workloads out across multiple availability zones doesn’t do much good if the failure spreads to multiple sites. While Uptime observed fewer outages in 2025, the report suggests outages may be lasting longer. "While a majority of publicly reported incidents are still resolved within 12 hours (55 percent), the share lasting more than 48 hours has increased for the second consecutive year." As we mentioned earlier, many of these were tied to factors like damaged fiber lines, which Uptime notes occurred more than twice as often as usual. As you might expect, the longer the outage, the more costly it can be, particularly when it concerns highly leveraged AI infrastructure. Uptime reports that one in five outages now exceeds $1 million in total costs, and expects that figure to continue to rise in the coming years. ®
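
A side note on what "adding 9s" means in practice: each extra nine divides the allowed downtime by ten, which is why the marginal nine is so expensive. The figures below are generic availability arithmetic, not numbers from the Uptime report:

    # Generic arithmetic for what each additional "9" of availability
    # permits in downtime per year; textbook figures, not Uptime's data.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
        downtime_min = (1 - availability) * HOURS_PER_YEAR * 60
        print(f"{nines} nines ({availability * 100:g}% uptime): "
              f"{downtime_min:,.1f} minutes of downtime allowed per year")

Going from three nines to four, for example, shrinks the annual allowance from about 526 minutes to about 53 – roughly the difference between one bad afternoon and one short incident.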

Anthropic butts in to small business, promises help with payroll and other core tasks

TheRegister - Wed, 2026-05-13 20:48
Anthropic is pushing into the small business space with a set of new plug-and-play tools designed for those without a tech team budget, but be warned: Depending on your Anthropic subscription tier, some business data might get sucked up to train Claude. Anthropic announced Claude for Small Business (CSB) on Wednesday, describing the new plugin as a way for SMB owners without AI expertise to automate the basic business tasks they’re saddled with – payroll, chasing payments, and launching campaigns – that are usually the purview of different departments at the enterprise level. Installation is designed to be dead simple, with Pro, Max, and Teams plan users able to add it as a plugin from the Cowork space in the Claude Desktop app. Skills can then be run using natural language prompts or slash commands outlined here. Users will find “a package of connectors and ready-to-run workflows” inside the CSB plugin, according to the announcement. The aforementioned capabilities of the plugin are part of 15 skills based on common repeatable business tasks, while 15 agentic workflows are also included across areas like finance, operations, marketing and the like. As for the connectors themselves, Anthropic specifically mentions seven of them included in Claude for Small Business: Intuit Quickbooks, PayPal, HubSpot, Canva, Docusign, Google Workspace, and Microsoft 365. An Anthropic spokesperson told The Register in an email that CSB isn’t limited to those connectors, but the skills and workflows rolling out for the plugin were only optimized for those connectors to start. Anthropic told us it chose those products based on the results of a survey of SMB owners, but it plans to add support for more connectors in the coming months. In other words, if you’re a small business owner and you rely on a platform not on that list, you’ll have to keep waiting a while longer if you want to pull that info into Claude.

Gotta reach ‘em all

It's logical that Anthropic is pushing into the SMB space. The company has seen a leap in business customer subscriptions this year, taking advantage of OpenAI’s slip in the professional user space, and with growth comes the search for new markets to tap. As Anthropic notes in the announcement, and as many analysts have pointed out, AI adoption among SMBs has historically lagged enterprises. That’s to be expected, of course: Enterprises have far more resources to invest in new, unproven technologies and the money to absorb failure when said new tech doesn’t pan out as expected. Anthropic said in its CSB announcement that it specifically designed the new plugin for “those who have historically been last in line for new technology,” or small businesses, in other words. The company also launched an AI fluency for small business course to help SMB owners understand what exactly they’re installing when they tell Claude to install CSB. But if you're taking part, you have to be OK with the idea that Anthropic might train its AI on your business data. Anthropic points out in the announcement that it doesn't train its AI models on the data of its business customers “on our Team and Enterprise Plans.” But as we noted above, Anthropic is marketing CSB to those on Pro, Max, and Teams plans, and the privacy policy page for Pro and Max says something quite different. “We will use your chats and coding sessions (including to improve our models),” the page states.
“Chat and coding session data we may use for improving our models includes the entire related conversation, along with any content, custom styles or conversation preferences, as well as data collected when using Claude for Chrome.” Raw content from connectors isn’t included, the page explains, “though data may be included if it’s directly copied into your conversation with Claude.” This only applies to users who, under regular circumstances, have chosen to allow Anthropic to use chats to improve Claude, but it likely won’t shock any El Reg readers to learn that permission is on by default – Anthropic told us that it's on users to turn it off. If you're copacetic with all this, you can start using CSB today - there’s no extra cost associated with installing the tool for anyone on a Pro, Max, or Teams plan. ®

Bug hunter tracks down three massive MCP flaws and one vendor won't fix theirs

TheRegister - Wed, 2026-05-13 20:17
Security vulnerabilities in MCP servers for three popular database projects could let attackers execute unintended SQL statements on Apache Doris, exfiltrate sensitive metadata from Alibaba RDS, and potentially take over Apache Pinot instances exposed to the internet. Alibaba, meanwhile, declined to patch its flaw. Apache issued a patch and a CVE tracker for Doris MCP, and there’s an open ticket in the MCP Pinot GitHub repository for the flaw, we're told. However, Alibaba decided not to patch the vulnerability in RDS MCP, according to Akamai security analyst Tomer Peled, who wrote about the flaws on Tuesday and will present his full research next month at x33fcon. MCP, or Model Context Protocol, is an open source protocol originally developed by Anthropic that allows LLMs, AI applications, and agents to connect to external data, systems, and one another. While security issues are never a good thing – and they are especially concerning when they exist in a server sitting between an AI agent and a production database – these in particular point to a larger problem in the way MCPs are developed. “There is missing or faulty security validation between the MCP server and its back end,” Peled wrote, adding that these security “gaps will become high-value targets for attackers and we expect more of these issues to surface.” Here’s a closer look at all three, starting with the flaw that has since been fixed and assigned a CVE. Apache Doris is a high-speed analytics and search database with more than 10,000 mid- and large-enterprise users. Its MCP server allows AI agents to interact with and perform operations on Doris instances. This includes running SQL queries and retrieving table and schema metadata – which foreshadows the flaw that was found: CVE-2025-66335, a SQL injection vulnerability that affects Apache Doris MCP Server versions earlier than 0.6.1. When an MCP tool is called, the server’s “exec_query” function fails to validate one of its five parameters (db_name) before constructing the SQL query. This means an attacker can invoke the function and inject malicious SQL through the db_name parameter, which gets prepended to the final SQL statement. Plus, the SQL validator only checks the first portion of the query, so all it sees is the attacker’s directive. “As a result, any attacker that gains access to a client connected to the Doris MCP server can execute arbitrary commands on the victim’s Apache Doris instance,” Peled said. Apache issued a patch in December to fix this flaw. The second issue, an authentication validation bypass in Apache Pinot MCP, can also lead to SQL injection attacks and full database takeover. Apache Pinot is another super-fast analytics database, and StarTree’s MCP integration for Pinot before v2.0.0 allowed users to run queries directly from their AI agent against their Pinot instance. The open-source project uses HTTP as the transport layer without requiring any type of authentication. This exposes the endpoint to remote attackers who can reach it, allowing them to invoke MCP tools, including those used for SQL execution. “In environments where the MCP endpoint is reachable externally, this behavior allows unauthenticated attackers to execute queries against the Pinot instance, which can allow a full remote takeover of the database,” Peled wrote.
StarTree has since added OAuth as an authentication option when using HTTP, which he says lowers the threat of SQL injection (but it still exists in the code), and Apache has also opened a security issue in the MCP Pinot GitHub repository. Pinot MCP v1.1.0 and earlier versions are affected. Neither Apache nor StarTree responded to The Register’s requests for comment. The third security flaw, an information disclosure issue in the Alibaba RDS MCP server, also stems from the server not authenticating users before invoking the retrieval-augmented generation (RAG) MCP tool, which allows AI models to connect with and query databases. This means “any client able to reach the MCP endpoint can issue requests to the server without any query validation,” according to Peled. “The vector index may contain table names, schema definitions, or other potentially sensitive metadata, and unauthenticated attackers can exfiltrate this data with little or no effort.” All versions of Alibaba RDS MCP are affected by this vuln. The bug hunter says that he reported the issue to Alibaba in November, and the cloud giant told him the issue is “not applicable” for a fix - so it’s still in the codebase. Akamai also reported this inaction to the CERT Coordination Center (CERT/CC). Alibaba did not respond to The Register’s inquiries. Peled said that the threat-hunting team, upon starting this investigation, assumed that there would be some baseline security specification for all MCP servers. Turns out they were wrong, and as the research found, flaws like SQL injection, missing authentication, and insufficient query validation exist in the code. “This means that more attention should be given not just to the specification but also to the best security practices guides when developing secure MCP servers,” he wrote. ®
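
For readers who want the injection mechanics spelled out: the Doris flaw as described reduces to an identifier interpolated into the front of a SQL string, ahead of a validator that only inspects the start of the query. The following Python sketch is illustrative only – not the actual Doris MCP server code – showing the pattern and the usual fix:

    # Illustrative sketch of the vulnerable pattern, NOT the real Doris code.
    # An unvalidated db_name lands at the front of the statement, so a
    # validator that only checks the first portion sees only attacker input.
    import re

    def exec_query_vulnerable(db_name: str, table: str) -> str:
        return f"USE {db_name}; SELECT * FROM {table} LIMIT 10"

    print(exec_query_vulnerable("prod; DROP TABLE users; --", "orders"))
    # USE prod; DROP TABLE users; --; SELECT * FROM orders LIMIT 10

    # The usual fix: allow-list identifiers, since names (unlike values)
    # cannot be bound as ordinary query parameters in most drivers.
    IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

    def exec_query_fixed(db_name: str, table: str) -> str:
        if not (IDENT.match(db_name) and IDENT.match(table)):
            raise ValueError("invalid identifier")
        return f"USE {db_name}; SELECT * FROM {table} LIMIT 10"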

Windows Update Is Getting Automatic Rollbacks For Faulty Drivers

Slashdot - Wed, 2026-05-13 20:00
Microsoft is adding a Windows Update feature called Cloud-Initiated Driver Recovery that can automatically roll back faulty drivers to a previously known-good version without waiting for hardware makers or users to fix the problem manually. PCWorld reports: The way faulty drivers are handled today, either the hardware partner pushes an updated driver or the end user manually uninstalls the problematic one. "This creates a gap where devices may remain on a low-quality driver for an extended period," says the blog post. With Cloud-Initiated Driver Recovery, Microsoft will be able to remotely trigger a rollback of the faulty driver to a previously "known-good" version of the driver via the Windows Update pipeline. Microsoft says that testing and verification of Cloud-Initiated Driver Recovery will continue until August this year, aiming to deliver this feature to Windows PCs starting in September.
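
Microsoft hasn't published the selection logic, but the "known-good rollback" idea is easy to picture. The sketch below is purely conceptual – every name in it is hypothetical, and it is not Microsoft's implementation:

    # Conceptual sketch of "roll back to the newest known-good version";
    # all names are hypothetical, not Microsoft's implementation.
    from dataclasses import dataclass

    @dataclass
    class DriverVersion:
        version: tuple[int, ...]  # structured version, e.g. (31, 0, 200)
        known_good: bool          # e.g. passed fleet-wide health telemetry

    def pick_rollback_target(faulty: tuple[int, ...],
                             history: list[DriverVersion]) -> DriverVersion | None:
        """Newest known-good version older than the faulty one, if any."""
        older = [d for d in history if d.version < faulty and d.known_good]
        return max(older, key=lambda d: d.version) if older else None

    history = [DriverVersion((30, 0, 101), True),
               DriverVersion((31, 0, 200), True),
               DriverVersion((32, 0, 50), False)]  # the flagged driver
    print(pick_rollback_target((32, 0, 50), history))
    # DriverVersion(version=(31, 0, 200), known_good=True)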

Read more of this story at Slashdot.

See through local AI lies with Irish eyes

TheRegister - Wed, 2026-05-13 19:35
You may find yourself living with a new tech stack. And you may find yourself in an unfamiliar world. And you may find yourself behind a keyboard and screen, with a mouse and inscrutable AI. And you may ask yourself, how do I vet this? The Irish Council for Civil Liberties (ICCL) Enforce project has a suggestion: install its Verity MCP server. "LLMs [large language models] confidently claim things that are manifestly untrue," the advocacy organization explains. "Enforce has developed Verity, a tool that helps minimise false claims and fake sources from self-hosted LLMs." An MCP (Model Context Protocol) server provides AI models with access to external tools, data, and services. The Verity MCP server offers access to a set of smaller models that will try to assess the accuracy of a primary local LLM. Self-hosting a model this way is something more people have begun to explore in response to rising prices at cloud AI providers, availability issues, and privacy concerns. Verity is not simply an LLM-as-judge setup in which one LLM evaluates the output of another. Rather, it's a set of seven layers designed to review model output. The system involves: strict rules for fact sourcing; a strong critic LLM that differs from the primary model family; a small critic LLM similar to the strong critic but from different training data; an encoder transformer trained on entailment labels; a regex evaluator; a stochastic re-sampler for catching low-confidence guesses; and a logprob analyser that checks token entropy. The reference build assumes a 2021 PC, with an Nvidia RTX 5070 Ti (16 GB, 2025) for the primary model (Qwen 3.5 9B, Q4_K_M), and an AMD Radeon RX 5700 XT (8 GB, 2019) for the critic models (IBM Granite 3.2 8B & 2B, Q4_K_M). The hardware recommendations call for a system with two GPUs, but that's to allow concurrent delivery of a second opinion from the Verity checker. On a machine with one GPU, like a MacBook Pro or Mac mini, the system can be configured to evaluate the primary LLM's output after the fact. "Even the biggest LLMs inevitably produce false claims," said Dr Johnny Ryan, director of ICCL Enforce, in an email to The Register. "This is a feature of how an LLM works. But this is dangerous when people start to put their faith in these systems. LLMs are being incorporated into judicial systems, public services, corporate life, the military, and people’s private decision making. For example, Google’s automated search summaries will confidently claim things that are not proven by the sources they cite." Ryan said if people intend to rely on LLMs for factual answers, they need a verification process. "One beauty of running LLMs on your own hardware is that you can take steps against this," he said, adding that local operation provides an opportunity to enlist old hardware to offer a second opinion alongside the main model output. "So for example, when your machine is working on an answer to a question you have asked, you can have an old graphics card that independently produces a second opinion using a different LLM on the same machine, and the two can then debate at the end without significantly slowing down the process," he explained. The downside of an all-local approach is that models have a training cutoff and won't be very useful for checking facts established after that date unless armed with tools for online data fetching. If you can get Verity up and running, that shouldn't be a problem. ®
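
Of those seven layers, the logprob analyser is the easiest to picture: local runners can expose each token's probability distribution, and a high average entropy flags output the model itself was unsure of. Here is a generic sketch of the standard formula – not Verity's actual code:

    # Generic token-entropy check of the kind described above; this is the
    # standard Shannon formula, not Verity's implementation.
    import math

    def token_entropy(probs: list[float]) -> float:
        """Shannon entropy (bits) of one token's probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def mean_entropy(distributions: list[list[float]]) -> float:
        return sum(token_entropy(p) for p in distributions) / len(distributions)

    confident = [[0.97, 0.02, 0.01]] * 4        # peaked: the model is sure
    guessing = [[0.25, 0.25, 0.25, 0.25]] * 4   # flat: the model is guessing

    print(f"confident: {mean_entropy(confident):.2f} bits/token")  # ~0.22
    print(f"guessing:  {mean_entropy(guessing):.2f} bits/token")   # 2.00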

Fragnesia Made Public As Latest Linux Local Privilege Escalation Vulnerability

Slashdot - Wed, 2026-05-13 19:00
A new Linux local privilege escalation flaw called Fragnesia has been disclosed as a Dirty Frag-like vulnerability, allowing arbitrary byte writes into the kernel page cache of read-only files through a separate ESP/XFRM logic bug. Phoronix reports: Proof-of-concept code for Fragnesia is already out there. There is a two-line patch for addressing the issue within the Linux kernel's skbuff.c code. That patch hasn't yet been mainlined or picked up by any kernel releases, but presumably it will be in short order, closing this local privilege escalation hole. More details can be found here.

Read more of this story at Slashdot.

LinkedIn Planning To Lay Off 5% of Staff In Latest Tech-Sector Cuts

Slashdot - Wed, 2026-05-13 18:00
An anonymous reader quotes a report from Reuters: LinkedIn planned to inform staff of layoffs on Wednesday, two people familiar with the matter told Reuters, in a widening of technology sector cuts this year. The Microsoft-owned social network plans to cut about 5% of its headcount as it reorganizes teams and focuses personnel on areas where its business is growing [...]. LinkedIn employs more than 17,500 full-time workers globally, its website says. Reuters was unable to determine the teams affected. The cuts come as revenue at LinkedIn, which sells recruiting tools and subscriptions, rose 12% in the just-ended quarter from a year prior, in an acceleration of growth in 2026, according to Microsoft's securities filings. The rationale for the layoffs was not that artificial intelligence is replacing jobs at LinkedIn, one of the people told Reuters. The specter of AI-fueled disruption has nonetheless hung over software incumbents and workers generally.

Read more of this story at Slashdot.

KDE Receives $1.4 Million Investment From Sovereign Tech Fund

Slashdot - Wed, 2026-05-13 17:00
The German Sovereign Tech Fund has invested 1.2 million euros ($1.4 million USD) in KDE Plasma technologies to help strengthen the structural reliability and security of the desktop environment's core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services. Longtime Slashdot reader jrepin shares an excerpt from the announcement: For 30 years, KDE has been providing the free and open-source software essential for digital sovereignty in personal, corporate, and public infrastructures: operating systems, desktop environments, document viewers, image and video editors, software development libraries, and much more. KDE's software is competitive, publicly auditable, and freely available. It can be maintained, adapted, and improved in-house or by local software companies. And modifications (along with their source code) can be freely distributed to all users and departments within an organization. KDE will use Sovereign Tech Fund's investment to push its essential software products to the next level, providing every individual, business, and public administration with the opportunity to regain their privacy, security, and control over their digital sovereignty. Slashdot reader Elektroschock also shared a statement from Fiona Krakenburger, Technical Director at the Sovereign Tech Agency. "We have long invested in desktop technologies for a reason: they are the primary way people access and use digital services in everyday life," says Krakenburger. "The desktop holds personal data and mediates nearly every service we depend on, from booking the next medical appointment, to education, to the way we work. We are investing in KDE because it is one of the two major desktop environments used across Linux and plays a key role in how millions of people experience open technology. Strengthening KDE's testing infrastructure, security architecture, and communication frameworks is how we invest in the resilience and reliability of the core digital infrastructure that modern society depends on."

Read more of this story at Slashdot.

Dissatisfied: Three-fourths of AI customer service rollouts are a letdown

TheRegister - Wed, 2026-05-13 16:53
If you're thinking you can replace your human call center staff with a server farm of bots, think again. Nearly three-quarters of enterprises that deploy AI customer communications agents later roll them back or shut them down, according to new research suggesting the systems are far harder to manage reliably in production than the AI hype implied. Swedish comms-as-a-service firm Sinch surveyed more than 2,500 AI decision makers from various countries and industries for its AI Production Paradox study. The starkest finding is undoubtedly the 74 percent rollback or shutdown rate for deployed AI customer communications agents tied to governance failures, but that’s not the only sign enterprise AI deployments are falling short of expectations. AI rollback rates, which Sinch told us specifically refer to AI projects that were deployed and pulled from live service rather than projects that failed before launch, actually rise to 81 percent among organizations that it describes as having “fully mature guardrails.” That, says Sinch Chief Product Officer Daniel Morris, suggests governance alone is not fixing the problem. "The most advanced organizations aren't failing less; they're seeing failures sooner. Higher rollback rates reflect better monitoring and control, not weaker performance," Morris said in a press release. “If governance was the fix, the most mature teams would roll back less, not more. Our data points to a deeper issue.” According to the findings, 84 percent of AI engineering teams are spending at least half their time on safety infrastructure, leaving little time to develop AI. This is exacerbated by the fact that most firms said spending on AI trust, security, and compliance ranks ahead of AI development itself. “When 75% put trust, security, and compliance in that top three — ahead of AI development itself at 63% — that’s a finding about where the priority sits within their AI customer communications programs,” a Sinch spokesperson told us in an email. In other words, it seems like most organizations realize that their biggest issue with AI isn’t getting it working properly - it’s getting it to just work safely in the first place. “The operational cost of running AI safely at scale is much larger than most organizations expect,” the Sinch representative explained. The numbers don’t change based on organizational size or budget, either, Sinch told us. “The rollback rate holds consistently across every region and every industry in the study, which suggests size isn’t a meaningful protective factor,” the company said. “Rollback isn’t a symptom of under-investment or being too small to afford proper guardrails.” Of course, as a business communications service provider, Sinch linked its results back to AI customer service agents not being properly deployed on comms infrastructure designed for AI agents, a problem it’s naturally positioned to offer a fix for. Regardless, that three-quarter rollback figure doesn’t seem too out of place when you consider recent customer service automation news. As we’ve reported on multiple occasions, replacing customer service staff with AI hasn’t gone to plan for many businesses. Gartner said in June 2025 that half of organizations expecting AI to significantly reduce customer service headcount would abandon those plans by 2027. Sinch’s numbers suggest the problem may extend beyond staffing cuts to the AI agents themselves. 
Not that far-fetched when Gartner was already warning last year that fully agentless contact centers were not practical in the real world. "Our vendor evaluations reveal that an agentless contact center is not yet technically feasible, nor is it operationally desirable," Brian Weber, VP analyst in the Gartner Customer Service & Support practice, told The Register, adding that unexpected costs and unintended results were contributing to abandonment plans - just like what Sinch is reporting now. ®

Utah mega datacenter could dump 23 atomic bombs worth of energy per day

TheRegister - Wed, 2026-05-13 16:36
A proposed mega-scale datacenter in the US state of Utah has caused controversy after a physics professor estimated that the facility and its associated power generation could dump 23 atomic bombs' worth of energy per day. But the real question is whether it will actually ever get built. The datacenter is part of the Stratos Project Area in Box Elder County, Utah, overseen by the Military Installation Development Authority (MIDA), a state agency straddling the military, local government, and private developers. Creation of the Stratos Project Area, covering about 40,000 acres of land, was given the go-ahead in a May 4 announcement from the Box Elder County Commission, which had earlier delayed a vote amid residents’ concerns. At full buildout, the proposed Stratos campus could require up to 9 GW of power, making it one of the largest datacenter developments in the world. Meta’s planned Hyperion cluster is aiming for 5 GW, for example, while the first facilities hitting 1 GW are only expected to come online this year. For comparison, 9 GW roughly matches New York City’s average electricity demand. Utah State University physics professor Dr Rob Davies estimated that the proposed Stratos campus and its associated natural gas power plant could dump energy equivalent to 23 atomic bombs per day into the surrounding Hansel Valley. Davies’ preliminary analysis said this could raise daytime temperatures by 2°F to 5°F (1°C to 3°C) and nighttime temperatures by 8°F to 12°F (4°C to 6°C), potentially causing serious ecological impacts in the high-desert valley. Not surprisingly, many have questioned Davies’ figures, especially as he hasn't published his math, with the topic debated on forums such as Reddit. However, even skeptics such as Andy Masley, a writer and researcher who claims to have taught high school physics, find that the math broadly checks out, so long as the bomb you measure it by is the one dropped on Hiroshima, which, at about 15 kilotons, was much smaller than modern weapons. The key thing to bear in mind, however, is that an atomic bomb releases its energy all at once in the blink of an eye, whereas in the datacenter’s case, the release of the heat will be spread across 24 hours. Still, the point Davies was making is that this will be extra energy being pumped into what is already a fragile desert environment, and the figures “strongly indicate the need for thorough and independent ecological assessment” of the impact of the Stratos Project. A recent study by a team at the University of Cambridge also suggested that datacenters can create heat islands, raising surrounding temperatures by several degrees at distances of up to 10 km (over 6 miles). This was met with skepticism by Omdia Senior Research Director Vlad Galabov, who told The Register that “Simple physics suggests that even very large datacenters contribute only a small additional heat flux when spread over kilometres.” The Stratos Project is intended as a long-term scheme, with a multi-year buildout, meaning that it may not reach full capacity for a decade, if at all. Reports suggest that the finance industry is becoming increasingly concerned about the level of borrowing that is needed to continue this datacenter build boom. The Financial Times reported recently that banks are looking for new ways to offload risks, with JPMorgan Chase and Morgan Stanley trying to distribute datacenter-related deals across a broader range of investors.
CoStar Group also warned that construction costs for modern bit barn campuses have surged, thanks to massive upfront spending on land, power support systems and specialized construction, leading to large projects running into the billions of dollars. According to some estimates, building 1GW of AI datacenter capacity costs around $35 billion, with Nvidia’s figures said to peg the costs at $50 to $60 billion. If correct, the developers of the Stratos facility will be looking at costs in excess of $300 billion. Alan Howard, principal analyst for colocation and DC building at Omdia, puts the figure slightly lower, but still sees problems ahead. “Thinking about a rough $8m per MW, that would put datacenter construction at ~$8 billion for just building construction including power and cooling. The power generation and IT equipment would be on top of that, so the number would be over $100 billion,” he told The Register. “What’s important here is that the money comes from different sources: Stratos pays for site development; other companies will likely pay for building construction; even other companies will build and operate the onsite power generation; and even other companies will buy and operate the IT equipment." "The tricky part is the tepid climate for funding these big projects. While there will be multiple companies providing funding for different pieces, the debt financing underwriting process will look at the broader project as part of their risk assessment,” he stated. Developers in the US and elsewhere are also facing increasing opposition to datacenter projects from local communities, with projects being delayed or entirely canceled in response. ®
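
Circling back to the headline comparison: the sanity check skeptics ran is short. Davies hasn't published his inputs, so the sketch below only brackets the order of magnitude, assuming the roughly 15-kiloton Hiroshima yield and, for the upper bound, an illustrative 40 percent efficient gas plant:

    # Order-of-magnitude check on "23 Hiroshima bombs per day"; the yield
    # and plant efficiency are assumptions, as Davies' inputs are unpublished.
    TNT_J_PER_KT = 4.184e12            # joules per kiloton of TNT
    HIROSHIMA_J = 15 * TNT_J_PER_KT    # ~6.3e13 J

    SECONDS_PER_DAY = 86_400
    it_load_w = 9e9                    # 9 GW at full buildout

    bombs_it_only = it_load_w * SECONDS_PER_DAY / HIROSHIMA_J
    print(f"9 GW of IT load alone: ~{bombs_it_only:.0f} bombs/day")   # ~12

    # Include waste heat from onsite gas generation at an assumed 40%
    # efficiency; virtually all fuel energy ends up as heat in the valley.
    total_heat_w = it_load_w / 0.40
    bombs_total = total_heat_w * SECONDS_PER_DAY / HIROSHIMA_J
    print(f"including the gas plant: ~{bombs_total:.0f} bombs/day")   # ~31

Davies' figure of 23 falls between those two bounds, which is presumably why even his critics concede the arithmetic is roughly right while disputing what it implies.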

Mystery Microsoft bug leaker keeps the zero-days coming

TheRegister - Wed, 2026-05-13 16:16
The anonymous security researcher who has already maliciously exposed three Windows zero-days this year has revealed two more, dropping them just after Microsoft's monthly Patch Tuesday update. Nightmare-Eclipse, or Chaotic Eclipse, depending on which of their aliases you prefer, released details about YellowKey and GreenPlasma - respectively a BitLocker bypass and a privilege escalation flaw, handing SYSTEM access to attackers. Experts speaking to The Register warned that both vulnerabilities present serious security concerns, especially since Nightmare-Eclipse released substantial technical information about exploiting them. Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC). Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue."

Four, five… and more?

YellowKey and GreenPlasma are the latest in a series of five Microsoft zero-day bugs the researcher has exposed this year. When Nightmare-Eclipse released BlueHammer (CVE-2026-32201, CVSS 6.5) - patched by Microsoft in April - they were described as a disgruntled researcher who has since been rumored to be a former Microsoft employee. According to their maiden blog post under the Chaotic Eclipse alias, the bug leak began after an alleged violation of trust. "I never wanted to reopen a blog and a new GitHub account to drop code," they wrote.
"But someone violated our agreement and left me homeless with nothing. They knew this will happen and they still stabbed me in the back anyways, this is their decision not mine." In early April, the researcher leaked proof-of-concept code for Windows Defender exploits they called RedSun and UnDefend - another admin privilege escalation bug and denial-of-service flaw, respectively - as well as BlueHammer. Both RedSun and UnDefend remain unfixed, and according to Huntress, the proof-of-concept code released was quickly picked up and abused in real-world attacks. Ferguson described the exposure of YellowKey and GreenPlasma as the latest in an escalating, retaliatory campaign against Microsoft, and warned of more coming. "Prior releases include BlueHammer and RedSun, both of which attracted serious community attention and real forks," he said. "The same post linking yesterday's releases warns of another Patch Tuesday surprise and hints at future RCE disclosures. They claim to have a dead man's switch with more ready to go. This researcher has followed through on every prior threat." ®

Harvard Votes On Limiting 'A' Grades

Slashdot - Wed, 2026-05-13 16:00
Harvard faculty are voting on a proposal (PDF) to curb grade inflation by limiting solid A grades to 20% of students in a class, plus four additional A's per course. Axios reports: Grade inflation is at a tipping point at Harvard. A move to make A grades harder to come by at one of the world's leading universities could influence grading debates at peer institutions. Solid A's account for nearly two-thirds of all undergraduate letter grades. That's up from roughly a quarter 20 years ago. More than 50 members of last year's class graduated with perfect GPAs. [...] Faculty are voting on three separate provisions, each requiring a simple majority to pass: a cap to limit solid-A grades to 20% of enrolled students in a class, plus four additional A's per course; changes to how internal honors are calculated, moving from traditional grade point average scoring to an average percentile rank; and allowing courses to use new "satisfactory" or "unsatisfactory" marks with a "satisfactory-plus" distinction. A pre-vote faculty poll showed around 60% of the 205 respondents favored the 20-plus-four formula over an alternative. Supporters of the cap argue it's intentionally modest as it places no restrictions on A-minuses. The four-grade buffer is designed to protect small seminars where a higher proportion of students may succeed. [...] If passed, changes would take effect in fall 2027, followed by a mandatory three-year review.
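
For a sense of how the 20-plus-four formula plays out, here is the arithmetic at a few class sizes. Whether the 20% is rounded or floored isn't specified in the excerpt; this sketch floors it:

    # Worked arithmetic for the proposed cap: 20% of enrolled students,
    # plus four additional A's. Flooring is an assumption; the proposal's
    # rounding rule isn't specified in the excerpt.
    import math

    def max_solid_as(enrolled: int) -> int:
        return math.floor(0.20 * enrolled) + 4

    for size in (12, 30, 150):
        print(f"class of {size:>3}: at most {max_solid_as(size)} solid A's")
    # class of  12: at most 6 solid A's  (the +4 buffer dominates seminars)
    # class of  30: at most 10 solid A's
    # class of 150: at most 34 solid A's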

Read more of this story at Slashdot.

Rust stalks IBM mainframes, but only in nightly form

TheRegister - Wed, 2026-05-13 15:21
IBM's effort to bring in-kernel Rust to its mainframe platform has taken a step forward, although anyone hoping to use it on production iron will need to be comfortable with a nightly Rust compiler for now. Engineer Jan Polensky has submitted a patch series titled "s390: enable Rust support and add required arch glue." If accepted, it will allow Rust code to be used in the Linux kernel on IBM mainframe hardware, which the kernel still refers to as s390 after the generation of IBM mainframe kit introduced in 1990. He notes that, for now, a nightly build of the Rust compiler is required. That does not sound to us like the sort of thing many conservative mainframe shops are likely to embrace with enthusiasm, but even big new features have to start somewhere. This is a significant step. When Rust was introduced into the kernel in 2022, The Register mentioned a problem that we rarely see raised elsewhere: while the kernel is generally compiled with GCC, the standard Rust compiler, rustc, is based on LLVM instead. Wikipedia has a list of LLVM backends, and although the number is growing, it's a shorter list than GCC's 48. There is an experimental GCC front-end for Rust but it's not ready for prime time yet. The Linux kernel itself has supported compilation using LLVM since kernel 6.9 over two years ago. At the moment, the kernel development team is still working on version 7.1, which at the time of writing is still on release candidate 3 – so relatively early days. Last month, we reported on its new NTFS driver and the removal of some fairly ancient hardware support. The final version of Linux 7.1 will probably appear about halfway through 2026, meaning that kernel 7.2 is still quite far off. It might be in time to appear in Ubuntu 26.10 – but then again, we suspect that very few IBM mainframe customers use interim Ubuntu releases. ®

Meta Employees Launch Protest Against Mouse-Tracking Tech At US Offices

Slashdot - Wed, 2026-05-13 15:00
An anonymous reader quotes a report from Reuters: Meta employees distributed flyers at multiple U.S. offices on Tuesday to protest the company's recent installation of mouse-tracking software on their computers, according to photos of the pamphlets seen by Reuters. The flyers, which appeared in meeting rooms, on vending machines and atop toilet paper dispensers at the Facebook owner's offices, encouraged staffers to sign an online petition against the move. "Don't want to work at the Employee Data Extraction Factory?" they asked, according to the photos seen by Reuters. [...] The pamphlets and the petition both cite the U.S. National Labor Relations Act, saying "workers are legally protected when they choose to organize for the improvement of working conditions." In the UK, a group of Meta employees has started organizing a drive for unionization with United Tech and Allied Workers (UTAW), a branch of the Communication Workers Union. The employees set up a website to recruit members using the URL "Leanin.uk," a reference to former Chief Operating Officer Sheryl Sandberg's best-selling book encouraging women to seek equal footing in the workplace. "Meta's workers are paying the price for management's reckless and expensive bets. While executives chase speculative AI strategies, staff are facing devastating job cuts, draconian surveillance, and the cruel reality of being forced to train the inefficient systems being positioned to replace them," said Eleanor Payne, an organizer with UTAW. "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them -- things like mouse movements, clicking buttons, and navigating dropdown menus," said a statement Meta issued earlier.

Read more of this story at Slashdot.

Royal Household seeks £3M finance system fit for a King

TheRegister - Wed, 2026-05-13 14:33
The UK's Royal Household plans to spend £3 million ($4 million) on a new finance system to replace one that is more than 15 years old, the Buckingham Palace-based organization said in a procurement notice published on May 7. King Charles III gets to announce the British government's overall plans at the start of each parliamentary term but also has his own miniature civil service in the shape of the Royal Household, described by the procurement notice as a "public undertaking (commercial organization subject to public authority oversight)." The household received £86.3 million ($116 million) in taxpayers' money in 2024-25, known as the Sovereign Grant, and generated a further £21.5 million ($29 million) from activities including tours of Buckingham Palace. The King also receives private income from the Duchy of Lancaster, an estate held in trust for the sovereign, while the Royal Collection Trust, a charity, cares for the royal art collection and runs visitor attractions including galleries at Buckingham Palace and Holyroodhouse in Edinburgh. According to the procurement notice, the Royal Household contacted suppliers on various Crown Commercial Service (now Government Commercial Agency) frameworks for information last year. It now plans to award a contract for financial software, implementation, training, and support, initially for five years from September 30, 2026, with a possible two-year extension, covering the household and a number of affiliated organizations. To manage this project, last year the household's Privy Purse and Treasurer's Office – its back office division that runs finance, human resources, technology and facilities management – advertised for a finance systems technical project manager, a two-year fixed contract paying £60,000 to £65,000 annually. According to the job advert, the role requires "a proven track record of delivering successful ERP or finance system projects" and the ability to "tailor your style to suit technical and non-technical audiences alike." As well as a 15 percent pension contribution, perks of leading the installation of His Majesty's new financial software include free entry to royal locations and 20 percent off at Royal Collection Trust gift shops. ®

Microsoft aims to speed Windows with 'leap forward' in WinUI 3 perf

TheRegister - Wed, 2026-05-13 13:49
Microsoft claims to have achieved a "leap forward" in performance for WinUI 3, the current native framework for Windows apps, with a 25 percent improvement for the parts of File Explorer coded using this framework. Software engineer lead Beth Pan posted figures for the WinUI portion of File Explorer, showing 41 percent fewer memory allocations and 45 percent fewer function calls. She added that some optimizations "involve small or large breaking changes," so they will be opt-in at first for developers using the framework. The plan is for the optimizations to become the default in future versions of WinUI and the Windows App SDK, with opt-out available when needed. The new optimizations are part of a push to make Windows more responsive. In March, Windows boss Pavan Davuluri promised to improve the quality of the operating system, including a commitment to a "faster and more dependable File Explorer." His post noted that Microsoft intends to "move more experiences to WinUI 3" for faster responsiveness. Pan's post is bittersweet for developers. Performance issues with WinUI 3 have been well known for years. Although Microsoft calls it a native framework, that is a stretch. WinUI 3 is based on WinRT (Windows Runtime), a component interface first used in Windows 8 that sits between application code and the underlying Win32 API, which has a better claim to being native. An advantage of WinUI 3 is its support for Fluent UI, the Windows design system. Developers using WinUI 3 get the Windows 11 look and feel, but not the best performance. "WinUI 3 is currently measurably slower than both WPF [Windows Presentation Foundation] and UWP [Universal Windows Platform]… this is NOT OK," said one comment. Another said that "you can't build a WinUI app and call it smooth at the same time." Component vendor DevExpress has also posted about WinUI 3 performance issues. The company stated that WinUI component architecture has the potential for fast rendering and animation, but that "unfortunately, each action within a component requires WinRT interop, which is slow." These concerns undermine Davuluri's hope that using more WinUI 3 will fix Windows 11 performance, unless the framework itself is improved, as Pan now claims. Another longstanding gripe among Windows devs is that Microsoft's developer division has created frameworks that the Windows and Office teams have not always adopted consistently. Internal tensions go back many years. Some may still remember early builds of "Longhorn," the code name for Windows Vista, having to be reworked before Vista's eventual release in 2007 because of performance issues with .NET. This caused distrust of .NET in the Windows team. "What you need to do is actually use your framework across the company," said another comment. Pan replied, insisting "that's the push." This is exactly what developers using WinUI 3 want to hear, but the long and tangled history of Windows UI frameworks suggests that a consistent and enduring company-wide approach is unlikely. ®

SpaceX sets date for Starship test that asks: Did we break anything in the upgrade?

TheRegister - Wed, 2026-05-13 13:29
SpaceX has named the earliest date for the next Starship launch – May 19. The company has already completed a Wet Dress Rehearsal (WDR), so the next step is to cross fingers and launch the stainless steel behemoth. The launch window for the 12th flight test of Starship opens at 5:30 pm CT, and, as with previous test flights, the vehicle will be on a suborbital trajectory. The launch, from an entirely new pad, will be the first of SpaceX's third-generation Starship and will validate that there have been no inadvertent regressions. Raptor 3 engines power the Starship and Super Heavy Booster. On the booster, SpaceX has reduced the number of grid fins used during recovery from four to three, increased their size by 50 percent, and added a new catch point, although there are no plans to catch the booster on the next flight – it is destined for the Gulf of Mexico. SpaceX wrote: "As this is the first flight test of a significantly redesigned vehicle, the booster will not attempt a return to the launch site for catch." For Starship, changes include a redesign of the propulsion system, increased propellant tank size, and improvements to the reaction control system. The Starlink dispenser mechanism has also been updated to increase satellite deployment speed – 22 mass simulators will be carried on this mission. Ultimately, this is a considerably enhanced rocket, with more powerful engines and a new launchpad and tower. Hence the need to demonstrate that nothing has been broken along the way. The objectives are therefore familiar. SpaceX will call the flight of the booster a success if there's a successful launch, ascent, stage separation, boostback burn, and finally a landing burn in the Gulf of Mexico. Starship's objectives are to deploy Starlink simulators, which will also be on a suborbital trajectory to burn up harmlessly, restart a single Raptor engine, and survive a controlled re-entry, although the vehicle will not be recovered for reuse this time around. Two of the Starlink simulators will be able to scan and capture imagery of the vehicle's heat shield, allowing engineers to assess its readiness for return to the launch site on future missions. In addition to painting some heat shield tiles white to simulate missing tiles for the imaging test, a single tile has been removed to see what happens to the surrounding tiles. Finally, the vehicle will attempt a dynamic banking maneuver to mimic the trajectory of a future return-to-Starbase mission. ®

Greater Manchester still says no to NHS data platform with Palantir at its heart

TheRegister - Wed, 2026-05-13 13:07
One of the UK's biggest health regions has doubled down on its decision not to join the NHS Federated Data Platform (FDP), owing to concerns over its lead supplier, Palantir, and a lack of evidence for the technology's benefits. Greater Manchester Integrated Care Board (ICB), which manages health services for 2.8 million people, deferred a decision on whether to sign up to the FDP last year. It is the only ICB in England to do so. A board meeting in May 2025 heard that NHS England had not addressed the ICB's concerns around risks. The ICB added that Greater Manchester's capability in data analytics was greater than what the FDP currently offered. In a November meeting, Greater Manchester ICB said it would review its position. However, a recent Freedom of Information response said that review was now off the table, and the ICB would stick with its decision not to join the FDP. "It was proposed that a paper would be produced in due course to guide a review of the ICB position," the response said. "This paper has not been produced yet and work has not started on this paper because it's clear that the public concerns have heightened rather than diminished since the deferral decision has been taken and there does not appear to be any compelling evidence that the value proposition for NHS GM from FDP has materially changed in favour of adoption." NHS England has been offered the opportunity to comment. The FDP was created by Palantir under a much-criticized £330 million procurement for a seven-year contract awarded in November 2023. NHS England signed the deal after it awarded £60 million to the vendor without competition during the pandemic. The FDP is designed to improve information flow through various NHS organizations and reduce the backlog in non-urgent "elective care," which skyrocketed during the COVID-19 outbreak. NHS England confirmed that Palantir staff could access patient data following a change in policy, provoking outrage from those concerned about the US spy-tech firm's position at the heart of NHS data after a series of outspoken political positions from its leadership. Last month, the junior minister responsible for the FDP said the government would consider using a break clause in the FDP contract to remove Palantir, although he defended the system's performance. Liberal Democrat MP Martin Wrigley claimed the NHS was locked into the Palantir contract and owned none of the software or intellectual property resulting from it. ®
