Linux fréttir

NASA fleshes out Artemis III, the Moon mission that won't go to the Moon

TheRegister - Thu, 2026-05-14 11:59
Artemis III is currently targeted for late 2027, and NASA has shared some of its plans for the mission, though exactly how SpaceX and Blue Origin will participate remains unclear. The mission to low Earth orbit will be launched with a "spacer" rather than the Interim Cryogenic Propulsion Stage (ICPS) that would otherwise be used on lunar voyages to send the Orion capsule to the Moon. According to NASA, the crew will spend more time in the Orion capsule than the Artemis II astronauts to further test the spacecraft's life support system. NASA will also demonstrate the docking system alongside an upgraded heat shield. As for the lunar lander, NASA has remained tight-lipped, only saying that operations would be "informed by Blue Origin and SpaceX capabilities." However, the agency stated that astronauts could potentially enter "at least one lander test article." There might also be an opportunity to evaluate the interfaces of Axiom's AxEMU spacesuit. There could, in theory, be three launches during the Artemis III mission: one for Orion, atop the SLS (the core stage of which is in NASA's Vehicle Assembly Building), with separate launches for SpaceX's Starship human landing system pathfinder and Blue Origin's Blue Moon Mark 2 landing system pathfinder. Without an ICPS, the European-built Orion service module will provide propulsion to circularize the spacecraft's orbit. Artemis III was supposed to mark a crewed return to the lunar surface, but was changed earlier this year to be a test of commercial lunar lander technologies in low Earth orbit. Jeremy Parsons of NASA's Exploration Systems Development Mission Directorate called the development a "stepping stone" to a lunar landing, saying: "For the first time, NASA will coordinate a launch campaign involving multiple spacecraft integrating new capabilities into Artemis operations." Kind of. In 1965, NASA launched the first crewed flight of the Gemini program. 
Several stages in the program involved launching another spacecraft – the Agena target vehicle – followed by a crewed Gemini launch to demonstrate rendezvous and docking techniques. The final crewed flight, Gemini 12, was launched less than two hours after the Agena [PDF]. While NASA is unlikely to manage that sort of quick-fire launch cadence, the agency will also expect to avoid a repeat of the infamous Gemini 8 incident, in which a stuck thruster almost resulted in the loss of astronauts David Scott and Neil Armstrong. ®
Categories: Linux fréttir

Cops arrest man suspected of being Dream Market kingpin

TheRegister - Thu, 2026-05-14 11:26
A man police suspected of being the administrator of the former leading online drug bazaar Dream Market is facing charges in both his native Germany and the US following his arrest earlier this month. Prosecutors claim Owe Martin Andresen, 49, is the individual known by the “Speedstepper” alias, one of the few Dream Market admins identified by law enforcement in the 2019 attempts to shutter the platform. While other crime leaders on the platform have been convicted, it took the authorities years to identify their latest suspect, whom they believe was the main admin of the website. Authorities said they tracked him down by monitoring crypto wallets, and tracking purchases of gold bars that the indictment claims were delivered to his home address. Other lower-level admins have long been convicted, including French national Gal Vallerius, who was sentenced to 20 years in prison a year after being arrested at Atlanta airport in 2017 on his way to attend the World Beard and Mustache Championships (yes, really). Andresen was arrested by German police on May 7 after the US indicted him in January, charging him with several counts of money laundering offenses. He faces similar charges in Germany. Authorities spent years gathering small pieces of evidence that eventually tied Andresen to Dream Market’s helm. After the platform shut down in 2019 amid mounting pressure from law enforcement, none of the suspected admins touched Dream’s infrastructure, including the operation’s known cryptocurrency wallets, which contained millions of dollars’ worth of tokens. Three years later, between November and December 2022, Andresen allegedly accessed these wallets and transferred the contents into a single, consolidated one - a step only someone with access to Dream’s private key could carry out. Police believe this was Speedstepper.
The next breadcrumb came almost a year later, when in August 2023, Andresen allegedly used an Atlanta-based cryptocurrency service provider to purchase gold bars from various international companies using the funds from the consolidated wallet. The indictment claims he had those gold bars shipped directly to his house in Germany, instead of choosing a more neutral, less compromising location. Between then and April 2025, German police believe they have identified several other money laundering schemes executed by Andresen, washing more than $2 million in the process. Upon his arrest on May 7, police searched Andresen’s residence “and two other locations,” at which officers found gold bars worth approximately $1.7 million, more than $23,000 in cash, as well as several bank accounts and crypto wallets containing roughly a combined $1.2 million. All of these proceeds are thought to stem from the funds generated by Dream Market and the various fees it charged for transactions and sellers to list their illicit wares. Dream Market operated between 2013 and 2019 and benefited greatly from the Alphabay and Hansa seizures, scooping up their users after playing second fiddle to both platforms for much of their respective reigns. According to US Attorney Theodore Hertzberg, at its peak, Dream had around 100,000 concurrent listings, most of which were for drugs. The US said the market was responsible for the trafficking of huge quantities of illegal narcotics, including more than 90kg of heroin, 450kg of cocaine, 25kg of crack cocaine, 45kg of methamphetamine, 13kg of oxycodone, and 36kg of fentanyl. “Andresen allegedly channeled commissions earned from selling illegal drugs, stolen personally identifiable information, counterfeit identification documents, and other items through cryptocurrency wallets and even converted his ill-gotten gains into gold bars,” said US Attorney Hertzberg. 
“Thanks to the close coordination between federal and German law enforcement, Andresen and his co-conspirators will no longer profit from the online sales of narcotics and fraud services, and Andresen will be prosecuted in both Germany and the United States as a result of his actions.” Andresen faces 12 federal charges - six counts each of international and domestic concealment money laundering - each carrying a maximum 20-year sentence. German authorities also charged Andresen with “several” counts of domestic money laundering, with each charge carrying a maximum five-year prison stint. ®
Categories: Linux fréttir

UK government prescribes Single Patient Record for NHS data chaos

TheRegister - Thu, 2026-05-14 11:04
The UK government has confirmed plans for a Single Patient Record (SPR), a major overhaul of NHS health data management that could involve the service's controversial Palantir-run Federated Data Platform (FDP). In the King's Speech yesterday, the Labour government said it would push ahead with plans to introduce the NHS Modernisation Bill in the new Parliamentary year, which is set to include legislation for the introduction of the SPR. Previous governments have found their efforts to bring together electronic patient records held by family doctors, hospitals, and other specialist services beset by technical complexity, a mind-bending web of rules and roles, and some cultural intransigence. Nonetheless, the government said its plan for the SPR would allow the NHS to "bring together patients' health and social care records into one place to improve patient safety and experience." It said patients would be able to see their own health records securely on the NHS App. The plan is to roll out the service to those receiving maternity and frailty care by 2028, with wider implementation to follow. An impact statement for the policy, published in January, said costs would encompass product development, tech, and data integration including alignment with external vendors, delivery and administration such as business case development, engagement, clinical and system input, as well as commercial costs. "The broad scope of the SPR means it will require investment to ensure that staff such as paramedics and community pharmacists have the same access to their patients' data as those working in GP surgeries and hospitals," it said. "Depending on the approach to the SPR, in order to maximize its value, activities may need to include translating the medical terminology in care records into plain English so that they can be readily understood and used by the patient, and to digitize historic patient information." 
While the document says the SPR could support automated triage of patients, potentially reducing variation in the service, "there are risks to delivering the Single Patient Record due to the magnitude and complexity of the program and integration with legacy systems." The impact assessment said there was a risk of reliance on a single provider and "de-facto vendor-lock." "While many clinicians would support data sharing for the purposes of improving care, there may be a risk of clinical resistance to changes to data sharing if safeguards are perceived to be insufficient," the document said. Dr Emma Runswick, council deputy chair of doctors' union the BMA, said: "The NHS Modernisation Bill is a huge undertaking and doctors' and patients' past experience with large top-down reorganisations of the NHS have not always been a happy one. The announcement of a SPR is welcome, however it is crucial that GPs' voices are listened to in its implementation to ensure patient data remains safe and patient confidence is protected." Currently, GPs are official "controllers" of patient data under UK data protection law, although that may change with the introduction of the new SPR. NHS England is currently planning the SPR rollout. A meeting held by the soon-to-be-defunct quango last year "accepted that an appropriate data controller for SPR is necessary" and that change would require a review of the legislation. The minutes, obtained by campaign group medConfidential under the Freedom of Information Act, said: "Given SPR will be a multi-service record it would not be appropriate for GPs to act as the data controller. It was agreed that while the NHS will be the data controller/custodian, patients would expect to own their records: how this can be achieved requires further thought." 
In an official statement, BMA GP Committee England chair Dr Katie Bramall said: "GPC England has not been part of the discussions on what form the Single Patient Record will take, who will be granted access, the purposes for which it will be used, or which company will be contracted to operate it. "There are already existing mechanisms that allow those in secondary care to view the live GP record, and therefore, the Government needs to explain why an additional system is needed. Until the security of any data flows can be guaranteed, and full patient-facing audit trails are made available via the NHS App showing who has accessed confidential medical data and why, we remain concerned. "We also remind patients that they can exercise their right to opt out of secondary uses of their confidential medical data by visiting the NHS website." The NHS England Data and Digital Technology Committee also heard that the NHS was considering using existing electronic patient record (EPR) systems and/or a role for the controversial Federated Data Platform, run by US spy-tech firm Palantir, in building the SPR solution. Sam Smith, medConfidential coordinator, told The Register that the FDP/Palantir arrangement – which has been the focus of fierce criticism in Parliament recently – is likely to have a role either way. "Either there's going to be a new data store – which will be in Palantir – or there'll be infrastructure for bringing various APIs together, where you make a single call and you get back a summary of the patient's record. The system doing that will be the FDP. [NHS England] has not publicly decided what they're going to do, in practice. They'll probably do the API thing first, and if they don't get everything they wanted, they will eventually take a copy of the data." The government has backed its ambitions for NHS technology with a promised £10 billion in investment. But nationally led digital transformation in the NHS has failed in the past. 
The ambitious National Programme for IT (NPfIT), launched by the Blair Labour government in 2003, had a budget estimated at £12.7 billion ($17.2 billion). Although NPfIT introduced a number of new technologies, it fell short of introducing electronic health records throughout the NHS. The National Audit Office said it did not represent value for money, and in 2020 it warned there was a lack of systematic learning from past failures in NHS digital transformation. ®
Categories: Linux fréttir

Mystery Microsoft Bug Leaker Keeps the Zero-Days Coming

Slashdot - Thu, 2026-05-14 11:00
An anonymous researcher known as Nightmare-Eclipse, who has already leaked several Windows zero-days this year, has disclosed two more: YellowKey and GreenPlasma. The Register reports: Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC). Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. 
Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue." The other zero-days leaked include RedSun, a Windows Defender privilege escalation flaw; UnDefend, a Windows Defender denial-of-service bug; and BlueHammer, a separate Microsoft vulnerability tracked as CVE-2026-32201 that was patched in April. According to The Register, RedSun and UnDefend remained unfixed at the time of publication, and proof-of-concept code for the flaws was reportedly picked up quickly and abused in real-world attacks.

Categories: Linux fréttir

Dirty Frag gets a sequel as Fragnesia hands Linux attackers root-level access

TheRegister - Thu, 2026-05-14 10:01
Linux admins hoping Dirty Frag was a one-off horror from the kernel networking stack are about to have a considerably worse week. Researchers at Wiz have published an analysis of "Fragnesia," a Linux kernel local privilege escalation flaw discovered by William Bowling of the V12 security team that allows unprivileged users to gain root by corrupting page cache memory. The bug, tracked as CVE-2026-46300, has public proof-of-concept exploit code documented by V12 on GitHub that demonstrates the vulnerability being used against /usr/bin/su to spawn a root shell. According to Google-owned Wiz, the flaw sits in the Linux kernel's XFRM subsystem, specifically ESP-in-TCP processing tied to IPsec support. By carefully triggering the bug, attackers can modify protected file data in memory without changing the original files stored on disk. Wiz describes Fragnesia as part of the broader "Dirty Frag" bug family rather than a completely separate class of issue. Dirty Frag itself only surfaced days ago and was already attracting attention thanks to public exploit code, incomplete patch coverage, and unusually reliable privilege escalation. According to researcher Hyunwoo Kim, who uncovered Dirty Frag, "Fragnesia" emerged as an unintended side effect of patches shipped to fix the original Dirty Frag vulnerabilities, adding yet another entry to the long tradition of security fixes accidentally creating new security problems. As The Register previously reported, Dirty Frag followed hot on the heels of Copy Fail, another Linux kernel privilege escalation flaw that abused page cache handling to overwrite supposedly read-only files. Historically, local Linux privilege escalation bugs had a reputation for being unreliable, crash-prone, or fiddly enough that attackers needed good timing and a fair bit of luck to pull them off cleanly. 
Fragnesia looks different, as Wiz and V12 both say the exploit avoids race conditions entirely, making it far more predictable than older Linux root exploits like Dirty COW. That makes the bug much more useful after an initial compromise. An attacker who gains access to a system through phishing, stolen credentials, or a vulnerable cloud workload suddenly has a cleaner path to full root access. The V12 proof-of-concept repository is already public, while Linux vendors have started pushing out advisories and mitigation guidance. AlmaLinux warned that all supported releases are affected and urged administrators to patch quickly or disable unused ESP-related functionality where possible. Similar advisories have also been issued by Amazon Linux, CloudLinux, Debian, Gentoo, Red Hat Enterprise Linux, SUSE, and Ubuntu as distributors scramble to assess exposure across supported kernel versions. Microsoft also urged organizations to patch quickly, noting that though it had not observed in-the-wild exploitation so far, Fragnesia "can modify any file readable by the user, including [/]etc[/]passwd." The Linux networking stack is starting to look less like infrastructure and more like a root exploit vending machine. ®
Categories: Linux fréttir

Calling the cops just got extra AI as police seek to add tech to contact systems

TheRegister - Thu, 2026-05-14 09:15
Police forces across England, Wales and Northern Ireland will add personalization and artificial intelligence (AI) to their jointly run digital contact systems through a £72 million contract to manage and develop these systems. Almost all police forces in the three nations use the Digital Public Contact’s Single Online Home web platform for their own websites, with the platform also running Police.uk, a national information site, and Data.police.uk, which provides information on police-recorded crime. The Metropolitan Police Service (MPS), which hosts Digital Public Contact services on behalf of the National Police Chiefs Council, hopes to find a single supplier for these under a new contract running from July 2027 to December 2029, with a possible three-year extension, according to a market engagement procurement notice published on 12 May. Existing Digital Public Contact services include the Single Online Home websites, linked services that pass information on crimes and incidents from the public to relevant officers; and the National My Police Portal, a new service using GOV.UK’s One Login to link victims with officers in charge of cases, which South Yorkshire Police started using in January. The new contract will also cover use of AI. In March West Yorkshire Police and Digital Public Contact started using AI to extract material from old control room calls, which at present are normally recorded but not transcribed. In the procurement notice, the MPS said that AI could also be used in reporting, analysis, conversational interactions and staff assistance. In a speech on the development of Digital Public Contact last October, Cambridgeshire’s chief constable Simon Megicks said that the work also includes developing a natural language switchboard that can help direct incoming calls and live services to assist operators, which is being piloted by Humberside Police.
“It supports call handlers in real time, and as they converse, the AI listens in and conducts live database searches, surfacing relevant information instantly,” he said of the assistance service at a National Police Chiefs Council innovation event. “Operators are empowered to make better decisions, quicker: reducing risk and improving outcomes for the public.” In the King’s Speech on 13 May the government confirmed plans to merge forces in England and Wales and establish a National Police Service. The procurement notice says that the new contract will provide “a robust foundation” supporting these structural changes, although they are likely to take place beyond the end of the contract. Following a market engagement event on 9 June, the MPS plans to publish a tender notice for the work around the end of July. ®
Categories: Linux fréttir

Bedrock and a hard place: Claude adventure leaves AWS user staring down $30K invoice

TheRegister - Thu, 2026-05-14 08:30
The world of AI is exciting, but there are plenty of expensive pitfalls ready to catch out the unwary, as one Register reader found when taking Anthropic's Claude Opus for a spin courtesy of Amazon Bedrock. Our reader managed to run up Bedrock charges totaling $30,141.33 in April 2026, despite using AWS Cost Anomaly Detection (CAD) to avoid any nasty surprises. Thirty-three days before our reader's first use of Bedrock, the threshold in CAD was set to "Absolute ≥ $100 AND Relative ≥ 40%" so alerts should have fired if things got too spendy. As for which services to monitor, our reader chose "AWS Services," which Amazon says "tracks all AWS services automatically." Except it apparently doesn't, at least not in the way our reader expected. The problem is that AWS Marketplace isn't supported by CAD, so costs incurred wouldn't trigger an alert. And how are Anthropic Claude models billed? Through the AWS Marketplace. After burning through our reader's AWS Activate credits (totaling $8,026.54 in this case), Amazon started charging for model inference on the Bedrock Marketplace, racking up $30,141.33, plus another $675.07 in AWS infrastructure charges, without a peep from the CAD service. "The credits masking made it worse," our reader told us. "AWS Activate credits did cover the first ~$8k of charges, which meant the Marketplace billing was silently working for weeks before the credits ran out. There was no notification when credits were exhausted – the charges simply started accumulating as invoiced amounts." The first warning that things were mounting up came in the form of a surprisingly large invoice. Corey Quinn, a cloud economist at the Duckbill Group and occasional contributor to this publication, told The Register: "It's unintuitive that Bedrock model spend is Marketplace unless you're entirely too familiar with AWS." 
Quinn told us he does most of his Claude inference directly with Anthropic to take advantage of the company's real-time billing, alerts, cutoffs, per-key limits, and so on. The approach has avoided some potentially expensive mistakes. As far as AWS is concerned, the lack of CAD support for AWS Marketplace charges makes it all too easy to run up a big bill without realizing it, particularly when it comes to AI usage. This could be regarded as a cautionary tale. If one digs deeply enough into the AWS documentation on CAD, there is a line that warns that AWS Marketplace is an unsupported service. However, it isn't clear that Claude on Bedrock is billed through the AWS Marketplace. The fact that Marketplace billing bypasses the monitoring tools compounds the issue, and could easily leave a customer getting an unpleasant surprise at invoice time. An AWS spokesperson told The Register: "AWS offers multiple tools to help customers manage spend, including AWS Budgets, which covers Amazon Bedrock spend on AWS Marketplace and other services. As noted in our documentation, AWS Marketplace charges are not currently supported by Cost Anomaly Detection. Customers with questions should reach out to AWS Support." ®
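For readers who want the coverage CAD lacks, the AWS Budgets route the spokesperson points to can be scripted. The sketch below is an illustration only: it builds the keyword arguments that boto3's `budgets.create_budget` call expects, with the budget name, monthly limit, alert threshold, and email address all invented for the example.

```python
def build_budget_request(account_id: str, name: str, monthly_limit_usd: str,
                         alert_threshold_pct: float, alert_email: str) -> dict:
    """Build the kwargs for boto3's budgets.create_budget call.

    Unlike Cost Anomaly Detection, AWS Budgets tracks total invoiced
    spend, which includes AWS Marketplace charges such as Bedrock
    model inference billed through Marketplace listings.
    """
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": monthly_limit_usd, "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": alert_threshold_pct,  # percent of the limit
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": alert_email}
                ],
            }
        ],
    }

# With a real boto3 client this would be passed as:
#   boto3.client("budgets").create_budget(**request)
request = build_budget_request("123456789012", "bedrock-guardrail",
                               "500", 80.0, "ops@example.com")
```

Because Budgets alerts on invoiced spend rather than streaming usage, they can still lag a runaway workload; Quinn's approach of per-key limits at the provider remains the tighter control.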
Categories: Linux fréttir

Physicists Find Possible Errors In 100-Year-Old Model of the Universe

Slashdot - Thu, 2026-05-14 07:00
A trio of preprint papers suggests the universe may not be perfectly uniform on the largest scales, finding tentative 2-to-4-sigma deviations from a core assumption of standard cosmology known as FLRW geometry. Live Science reports: The work combines observations of distant exploding stars and large-scale galaxy surveys to probe whether the universe truly follows a nearly 100-year-old mathematical framework known as Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. The analyses revealed mild-but-intriguing deviations from the predictions of the standard model. "We saw a surprising violation of an FLRW curvature consistency test, hinting at new physics beyond the standard model," study co-author Asta Heinesen, a physicist at the Niels Bohr Institute in Copenhagen and Queen Mary University of London, told Live Science via email, referring to the assumption that spatial curvature is the same everywhere. "This could potentially be due to various effects, but more research is needed to address the cause of the FLRW violation that we see empirically." [...] The analyses revealed small but potentially important departures from the predictions of standard FLRW cosmology. Depending on the dataset and analysis method, the discrepancy reached a statistical significance of about 2 to 4 sigma. In physics, sigma expresses how unlikely a result is to have arisen purely by chance; a 5-sigma result is typically required before scientists claim a discovery, so the new findings remain tentative. Still, the results suggest that something unexpected may be affecting the geometry or expansion of the universe. "The main finding is that you can directly measure Dyer-Roeder and backreaction effects from available cosmological data, and clearly distinguish these effects from other alterations of the standard cosmological model, such as evolving dark energy and modified gravity theories," Heinesen said.
"This was previously not possible in such a direct way, and this is what I think is the breakthrough in our work." "If these indicated deviations from an FLRW geometry are real, it would signify that most of the cosmological solutions considered for solving the cosmological tensions -- evolving or interacting dark energy, new types of matter or energy, modified gravity and related ideas within the FLRW framework -- are ruled out," the researchers wrote. The next step will involve applying the new theoretical framework to larger and more precise datasets. "It is to apply our theoretical results to data to test the standard model and to produce constraints on the Dyer-Roeder and backreaction effects," Heinesen said.
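The sigma thresholds above map directly onto tail probabilities of a normal distribution. A stdlib-only illustration, using the two-sided tail (one common convention):

```python
import math

def p_value(sigma: float) -> float:
    """Two-sided tail probability of a standard normal distribution:
    the chance of a fluctuation at least `sigma` standard deviations
    from the mean, in either direction, under pure chance."""
    return math.erfc(sigma / math.sqrt(2))

for n in (2, 3, 4, 5):
    print(f"{n} sigma -> p ~ {p_value(n):.2e}")
```

On this convention a 2-sigma result has roughly a 4.6 percent chance of arising by fluke, while 5 sigma corresponds to about 1 in 1.7 million, which is why 2-to-4-sigma findings like these stay in the "tentative" column.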

Categories: Linux fréttir

To gain root access at this company, all an intruder had to do was ask nicely

TheRegister - Thu, 2026-05-14 07:00
PWNED Welcome once again to PWNED, the column where we help you prepare for security success by studying others’ embarrassing failures. Today’s terrible tale involves individuals trying to do right by a company executive by letting their guard down, never a smart move. Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity is available upon request. Our sad story comes from Brandon Dixon, who currently serves as CTO and co-founder of AI security firm Ent. In a prior life, however, Dixon was a penetration tester for hire and he saw some things that made all my remaining hairs stand on end just hearing about them. During one pentesting assignment, Dixon tried to find out how easy it would be to steal someone’s account using social engineering. The answer: barely an inconvenience. Dixon telephoned IT security and pretended that he was the head of security who had lost his password. When they asked him challenge questions, he said he had forgotten the answers to those also. Then he gave them the password he wanted to use over the phone and they did a reset for him. After that, he was able to get into the network and do whatever he wanted there. There’s so much that’s obviously wrong here that it’s hard to know where to begin with our lesson-taking. The IT support agents should not have taken Dixon’s word that he was the security manager, especially after he failed challenge questions, and should have denied his request to reset the password. They were probably thinking “this guy is an executive and we don’t want to piss him off” rather than “we have procedures that everyone must follow.” The other problem here is that the IT department entered Dixon’s suggested password for him over the phone. First of all, the IT department should have sent a password reset to the real employee’s email or phone number. Second of all, it’s piss-poor security for anyone to know a user’s password other than the user themselves. 
And I say this as someone who used to work for a company where, if you had a problem, the IT support people would ask for your password via chat. Dixon also shared another story about social engineering from a time when he consulted for a pharmaceutical company. Members of the competition would call sales and marketing reps, pretend they were coworkers, and then extract information about upcoming drugs. This would allow competitors to know what was coming and how to respond to it. To help solve the problem, Dixon instituted a system where real employees had to give a secret password at the beginning of a conversation. “I built a system called 'Chal-Resp,' short for 'challenge-response,' that generated word pairings so a user could validate they were speaking with an actual employee,” he told The Register. “The caller would need to say the word and the end-user would need to respond with the proper challenge; only employees had access.” What both of Dixon’s stories have in common is the proof that humans are eager to please and be helpful. But healthy suspicion is at the root of infosec, so it behooves us all to be a little less helpful to strangers in the workplace. ®
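A word-pairing scheme like the Chal-Resp system Dixon describes can be sketched in a few lines. Everything here – the word lists, the function names – is a hypothetical illustration of the idea, not his actual system:

```python
import hmac
import secrets

# Hypothetical word lists; a real deployment would rotate these regularly.
CHALLENGES = ["harbor", "falcon", "quartz", "meadow"]
RESPONSES = ["lantern", "orchid", "summit", "piston"]

def generate_pairings(challenges, responses):
    """Randomly pair each challenge word with a response word.

    The resulting table is distributed only to verified employees,
    so knowing the correct response proves access to the table.
    """
    shuffled = responses[:]
    secrets.SystemRandom().shuffle(shuffled)  # cryptographically strong shuffle
    return dict(zip(challenges, shuffled))

def verify(pairings, challenge, reply):
    """Check a caller's reply; compare_digest avoids timing leaks."""
    expected = pairings.get(challenge)
    return expected is not None and hmac.compare_digest(expected, reply)

pairings = generate_pairings(CHALLENGES, RESPONSES)
# The caller speaks a challenge word; the employee answers with its pair.
ok = verify(pairings, "falcon", pairings["falcon"])
```

The design choice worth noting is that the pairing table itself is the shared secret: an outsider who overhears one exchange learns one pair, which is why rotating the table limits the damage.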
Categories: Linux fréttir

AI models are getting better at replacing cybersecurity pros on certain tasks

TheRegister - Thu, 2026-05-14 06:27
The UK AI Security Institute (AISI) has found that frontier models are quickly becoming more efficient when asked to do some cybersecurity work. AISI measures this with its "time window benchmark for cybersecurity," which estimates how much work an AI can do compared to a human. The benchmark yields findings such as: given a budget of 2.5 million tokens, Claude Sonnet 4.5 can do what a human cybersecurity expert can do in 16 minutes, about 80 percent of the time. AISI has found the human-comparable task time – 16 minutes in this instance – is growing, fast. If tokens flowed freely instead of being arbitrarily capped, AI models might do better still. In February 2026, AISI internally reduced the expected task time doubling period from 8 to 4.7 months, based on progress made since late 2024. With the release of Anthropic's Mythos Preview and OpenAI's GPT-5.5, AISI has once again had to compress its projected doubling period. "In February 2026, we estimated that frontier models' 80 percent-reliability cyber time horizon had doubled every 4.7 months since reasoning models emerged in late 2024, given a 2.5M token limit," the AISI said in a post on Wednesday. "This was around half our November 2025 doubling time estimate, which was 8 months for both 50 percent and 80 percent reliability. Claude Mythos Preview and GPT-5.5 have since significantly outperformed this trend." The recalculated doubling time estimate, given what Mythos Preview and GPT-5.5 can do, is even shorter than 4.7 months. AISI does not cite a specific value, but the organization points to similar time horizon estimates based on measurements of a broader skillset, software engineering, made by non-profit AI research house METR. "Their results imply a consistent doubling time of 4.2 months on software tasks since late 2024," AISI said, noting that with the latest Mythos Preview checkpoint (model update), it's closer to 4 months.
Note that the time window benchmark is not a broad assessment of capabilities – AISI is not saying frontier models are becoming twice as capable by all measures. It's a narrow assessment based on the time it takes people to accomplish security tasks. Citing a different metric, AISI says the latest Mythos Preview checkpoint solved a 32-step simulated corporate network attack called "The Last Ones" in six of 10 attempts and managed to complete a previously unsolved challenge, a seven-step industrial control system attack called "Cooling Tower," in three of 10 attempts. As a point of comparison, when Opus 4.6 was evaluated in February 2026, it completed a maximum of 22 of 32 steps for The Last Ones. That model managed to reach milestone 6, which involves reverse-engineering a Windows service binary to access encrypted credentials, escalating privileges via token impersonation, and recovering a cryptographic key to access a command-and-control management service. "Frontier AI's autonomous cyber and software capability is advancing quickly: the length of cyber tasks that frontier models can complete autonomously has doubled on the order of months, not years," AISI concludes. "What this evidence does not tell us is how the pace of progress will evolve, when AI will reach any particular capability threshold, or how these capabilities will translate against defended, real-world systems." The curl project offers one data point with regard to the real-world implications of the latest frontier models: Mythos managed to find just one confirmed vulnerability in its codebase. But watch this space. ®
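The arithmetic behind these doubling-time figures is plain exponential growth. A sketch using the article's numbers (this is an illustration of the math, not AISI's actual methodology):

```python
def projected_horizon(base_minutes: float, months_elapsed: float,
                      doubling_months: float) -> float:
    # Task-time horizon after repeated doublings: h = h0 * 2^(t / d).
    return base_minutes * 2 ** (months_elapsed / doubling_months)

# With a 16-minute horizon doubling every 4.2 months, five doublings
# (21 months) put the horizon at roughly 512 minutes – past a full
# working day of human-expert effort in under two years.
horizon = projected_horizon(16, 21, 4.2)
```

The sensitivity to the doubling period is the point of AISI's revisions: shaving the period from 8 to 4.2 months roughly doubles the number of doublings that fit into the same calendar window.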
Categories: Linux fréttir

Tencent admits GPUs only pay for themselves when powering personalized ads

TheRegister - Thu, 2026-05-14 04:40
Chinese web giant Tencent struggles to earn a return on investment from GPUs – unless it uses them to power its advertising business. “If we buy GPUs and we deploy them into our ad tech, then that's a relatively short-cycle investment,” said Chief Strategy Officer James Mitchell during the company’s Q1 2026 earnings call. “The GPUs yield better targeting, higher click-through rates and higher revenue and profit on a pretty accelerated basis,” he said. But the company views GPUs powering work on its Hunyuan foundation model as “important for our franchise.” Mitchell said Tencent is comfortable with this situation. “There's been many products within Tencent … that went through lengthy incubation periods where they had no return on investment, but we were confident in the franchise value creation,” he said. “And then over time, they had more lengthy harvesting periods where we've been able to drive very healthy returns on that sunk investment.” He predicted that AI will go through the same cycle. But Tencent is struggling to make the wheel turn because it’s only had enough GPUs to power its own services, leaving its public cloud without enough accelerators to rent to customers. Mitchell said Chinese manufacturers will soon fill the gap. “As the supply of China design GPUs progressively ramps up, then we'll be remedying that situation,” he said. Chief financial officer Shek Hon Lo weighed in with an observation that two factors made it hard for Tencent to get all the GPUs it wants: US sanctions, and “limited fab capacity within China.” “That's now being addressed because the China designed ASICs are seeing more supply from fabs within China as well as more supply from fabs in neighboring countries,” he said. But Tencent still expects GPU procurement to be harder than buying CPUs, as Lo said the company has “very long-term” deals with CPU vendors. “We've been a big customer for Intel and AMD for many years,” he said.
“We've been progressively growing our volume with them for many years, and they believe we will continue to progressively grow our volume for many years to come.” That remark will be cause for celebration at the US companies, which have watched other hyperscalers invest heavily in custom Arm silicon. Tencent posted another strong quarter, with revenue of RMB196.5 billion ($28.9 billion) representing 12 percent growth. The company’s Weixin and QQ messaging apps have 1.95 billion combined monthly users. Tencent has tweaked its mobile apps “to act as communication interfaces for controlling AI agents, allowing users to orchestrate agents from mobile for complicated task execution on PC and cloud.” Tencent’s Western rivals Google and Meta haven’t yet built similar apps. And they don’t experience the same hardware acquisition problems Tencent faces. ®
Categories: Linux fréttir

OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk

Slashdot - Thu, 2026-05-14 04:30
After three weeks of testimony, the Musk v. Altman trial is nearing its end. OpenAI has rested its case, closing arguments are set for Thursday, and jury deliberations are expected to begin afterward. An anonymous reader quotes a report from Business Insider: Joshua Achiam, OpenAI's chief futurist, was probably the most memorable witness of the day. He told jurors about a companywide meeting where Musk answered questions about his planned departure from OpenAI in 2018. Musk told the crowd of 50 or 60 people that he was leaving OpenAI to start his own competing AI. He said he wanted to "build it very fast, because he was very worried that someone else, if they got it, would do the wrong thing with it," Achiam said. Achiam said he challenged Musk on the safety of this approach, which he called "unsafe and reckless." "How did Musk respond?" OpenAI's lawyer Randall Jackson asked. "Defensively," Achiam said. "We had a pretty tense exchange, and he snapped and called me a jackass." In an effort to prove Achiam's story, OpenAI's lawyers brought a trophy to court that the futurist said he received after his heated exchange with Musk. On the witness stand, Achiam described the trophy as "a small golden jackass, inscribed with: 'never stop being a jackass for safety.'" He said his then-colleagues, Dario Amodei and David Luan, gave it to him as a thank-you for standing up to the Tesla CEO. Lead OpenAI attorney William Savitt told reporters after the day's session that Wednesday had been the first time he'd touched the statue. The futurist had to do without the visual aid, however. Judge Yvonne Gonzalez Rogers did not accept the trophy as evidence, so it did not appear before the jury. Musk and Altman have presented dueling experts on a question at the core of the trial -- was the nonprofit that runs OpenAI hurt or helped by its $13 billion partnership with Microsoft?
Musk's expert testified last week that the partnership was indeed hurt, supporting the Tesla CEO's contention that in partnering with Microsoft, OpenAI betrayed the company's nonprofit origins and mission. But on Thursday, OpenAI's expert, John Coates, used Musk's expert's own pie chart and testimony against him. The partnership has "generated value for the nonprofit that I believe he himself accepted was in the $200 billion range in his own testimony," Coates said, referencing Musk's expert Daniel Schizer. "If that's not faring well, I don't know what faring well is." In a point scored for Musk, the jury learned Thursday that Microsoft's own CTO once raised concerns about how OpenAI's early nonprofit donors, including LinkedIn cofounder Reid Hoffman, would react to a partnership. "I wonder if the big OpenAI donors are aware of these plans," Chief Technology Officer Kevin Scott said in a 2018 email he was asked to read aloud to jurors. In it, Scott said he doubted donors would appreciate OpenAI using their seed money to "go build a for-profit thing." Scott was being questioned by an OpenAI lawyer, who may have wanted jurors to quickly hear Scott's explanation: that he only had a "vague awareness" of what was happening at OpenAI at the time. Scott also told the jury he wasn't thinking about Musk when he made the remark. "Primarily, I was thinking about Reid Hoffman. He was the OpenAI donor I knew," Scott said, adding, "I wasn't thinking about anyone besides him."
Recap:
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

Categories: Linux fréttir

Cisco to fire 4,000 staff and generously give them free training – on Cisco

TheRegister - Thu, 2026-05-14 03:32
Cisco will make around five percent of staff redundant and has generously offered them free Cisco training for a year once they’re gone. CEO Chuck Robbins broke the news in a Wednesday blog post titled “Our Path Forward” that opens “Today we announced our Q3 FY26 earnings with record revenue of $15.8 billion, up 12 percent year over year, and double-digit top and bottom-line growth. The ELT [executive leadership team] and I could not be prouder of the growth you have all delivered for Cisco.” That growth included net income growing 35 percent to $3.4 billion. Yet Robbins’ pride was not sufficient for all Cisco staff to keep their jobs. The CEO said the layoffs are necessary because “The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest.” For Cisco that means “reducing roles in some areas” and also “making clear, strategic investments – particularly in silicon, optics, security, and in our employees’ use of AI across the company.” On Thursday, US time, close to 4,000 unlucky Cisco staff will be shown the door. Robbins said Cisco will help its soon-to-be-former workers find their next gig, and that the company’s efforts to do so have a 75 percent success rate. “We are also committed to continued personalized learning and will provide one year of access to all Cisco U courses and certifications, covering AI, Security, Networking, and more,” he added. Cisco made two big rounds of layoffs in 2024, one of which ejected seven percent of staff and the other resulted in Cisco firing five percent of employees. The restructures appear not to have slowed the company down: Robbins said product orders in Q3 rose 35 percent year over year – a figure that encapsulates a 105 percent year-over-year surge in revenue from hyperscalers and more modest 18 percent growth from other buyers. 
Robbins said Cisco has already scored $5.3 billion of AI infrastructure sales this year, and forecast full-year sales of $9 billion – 4.5 times its haul from last year. More prosaic products, like Wi-Fi kit, also grew fast as sales rose 40 percent. The company hopes to keep that cash flowing by building wireless kit that uses less memory. “You’ll see products that’ll become orderable in Q4 that’ll actually require 50 percent less memory,” Robbins said, with the design work to make that possible an example of the “20-plus programs that we’ve put into place that are active to reduce the memory utilization across the portfolio.” Cisco’s doing that even though the rising price of memory and storage has not put a dent in its margins – an outcome that execs attributed to supply chain management efforts. Glasswing to lift security sales Later in the earnings call, Robbins revealed that Cisco is participating in Anthropic’s Project Glasswing and using the Mythos model to test its code. The CEO said another impact of Anthropic’s bug-finding AI will be to accelerate plans to replace security appliances once other vendors’ use of Mythos finds flaws that are hard to fix. “I actually think while there will be a security opportunity, there’s going to most likely be a lot of focus from our customers on modernizing their infrastructure so that they don’t have this risk from technology that just can’t be patched,” Robbins said. Robbins said Cisco may have won an order or two from customers who were already close to replacing old security kit “and Mythos pushed them over the edge.” But he said Cisco didn’t receive “any meaningful orders in Q3 as a result of Mythos, but that could change in the future as we continue to work with customers.” ®
Categories: Linux fréttir

Welcome to the vulnpocalypse, as vendors use AI to find bugs and patches multiply like rabbits

TheRegister - Wed, 2026-05-13 23:27
The vulnpocalypse has begun. Palo Alto Networks usually finds five vulnerabilities a month, but on Wednesday said it scanned its entire codebase using the latest frontier models, including Anthropic’s Mythos, and found 75 security holes, covered in 26 CVEs. This comes a day after Microsoft said it used its new agentic bug hunting system called MDASH to find 17 vulnerabilities across its products - on a record-setting Patch Tuesday that saw Redmond disclose a whopping 30 critical CVEs. Plus, last week Mozilla said it fixed 423 Firefox bugs in April, which is more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Mythos found 271 flaws in Firefox 150. It shouldn’t be all that shocking. Security vendors have long warned about attackers using AI, and how this means defenders need to operate at AI speed to protect their own networks and systems (aka buying their AI-infused products). Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And this comes down to two things: more patches, and more work for admins. Zero Day Initiative’s chief vuln finder Dustin Childs agrees with this assessment. “At first, yes, this means more patches and thus more work for admins,” he told The Register. “The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down.” What will make this whole AI bug hunting season “really painful,” he continued, is if the patches don’t work or - worse yet - break things. “Many customers don’t trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on,” Childs added.
“This will be true even if AI only finds the bugs and doesn’t make the patches.” Bug hunting on steroids This isn’t to say security companies should avoid AI to find and fix flaws. “All vendors should use what tools they have to find and remediate bugs before they are exploited in the wild,” Childs said. “Ideally, they would find the bugs before they even ship, but I’m not holding my breath for that to happen.” Both Microsoft and Palo Alto Networks (PAN) are part of Anthropic’s Project Glasswing, which means they are among the select group of entities allowed to test Mythos, the much-hyped LLM, to find security holes in their own products. Palo Alto Networks began testing Mythos on April 7, and has since continued using the LLM and other frontier models, including Claude Opus 4.7 and OpenAI’s GPT-5.5-Cyber, according to product manager Lee Klarich. “Today, we released our May ‘Patch Wednesday’ security advisories,” Klarich said in a Wednesday blog, adding that “this is the first time where the majority of findings were the result of frontier AI models scanning our code.” The LLMs scanned over 130 Palo Alto Networks products and platforms, and as noted above found 75 issues, covered in 26 CVEs. None of these bugs are under exploitation, and as of Wednesday the company has fixed all bugs in its SaaS-delivered products and coded patches for all customer-operated products.
Maybe 5 months before 'AI-driven exploits the new norm' “We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries,” Klarich said in his blog, adding that his company expects “a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm.” A day earlier, Microsoft said its new multi-model agentic scanning harness (codename MDASH) helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack, as disclosed in May’s Patch Tuesday event. This included four critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service. “Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end,” Microsoft VP of agentic security Taesoo Kim said in a Tuesday blog. Tom Gallagher, VP of engineering at Microsoft Security Response Center, admitted that “this month's release sits on the larger side of a hotpatch month.” Gallagher said he expects AI-assisted bug hunting to increase Patch Tuesday releases as both Microsoft and third-party researchers use these tools to boost vulnerability discovery. And yes, all of this ultimately means more patches and more work. More patches = more work “Finding bugs has always been the cheap end of the pipeline,” Luta Security CEO Katie Moussouris told The Register. “Triage, disclosure, building patches that do not break production, and getting customers to deploy them is the expensive end, and nobody has funded it for this volume.” Moussouris helped convince Redmond's top brass that Microsoft needed a bug bounty program in 2013, and three years later started her own bug bounty consultancy. She noted Palo Alto Networks’ staggering jump in CVEs this month.
“Multiply that across every vendor and the bottleneck becomes admins and vulnerability management teams,” Moussouris said. She also stressed that people should be using these new models to find vulnerabilities. “It is exactly what defenders should be doing,” Moussouris said. “Both PAN and Microsoft landed on the same answer: no single model catches everything. PAN ran Claude Mythos, Claude Opus 4.7, and GPT-5.5-Cyber because each finds bugs the others miss,” she added. “Microsoft orchestrates over 100 specialized agents across multiple models. Add threat intel and codebase context, and Microsoft rediscovered 96 percent of five years of confirmed bugs in a critical Windows component. The asymmetry is temporary, PAN puts adversary parity at three to five months, so any vendor not scanning their own code now is letting someone else find their bugs first.” ®
Categories: Linux fréttir

Man Who Stole Beyonce's Hard Drives Gets Five-Year Sentence

Slashdot - Wed, 2026-05-13 23:00
A man accused of stealing hard drives containing unreleased Beyonce music, tour plans, and other materials from a rental car in Atlanta has pleaded guilty and accepted a five-year sentence, including two years in custody. Slashdot reader Bruce66423 shares a report from The Guardian: Kelvin Evans was named by the Atlanta police department in September in connection with a July 2025 theft in which two suitcases containing Beyonce music and tour plans were stolen from a rental car. [...] According to a July police report, Beyonce choreographer Christopher Grant and dancer Diandre Blue called 911 to report a theft from their rental vehicle, a 2024 Jeep Wagoneer, before Beyonce's Cowboy Carter tour dates in Atlanta. An October indictment stated that Evans entered the car on July 8 "with the intent to commit theft." The stolen hard drives contained "watermarked music, some unreleased music, footage plans for the show and past and future set list," according to a police report. Clothing, designer sunglasses, laptops and AirPods headphones were also stolen, Grant and Blue said. Local law enforcement searched for the location of one of the stolen laptops and the AirPods to try to locate the property. One police officer wrote in the report: "I conducted a suspicious stop in the area, due to the information that was relayed to me. There were several cars in the area also that the AirPods were pinging to in that area also. After further investigation, a silver [redacted], which had traveled into zone 5 was moving at the same time as the tracking on the AirPods." Evans was arrested several weeks after Grant and Blue filed a report, and was publicly named as the suspect in September. He was released on a $20,000 bond a month later. At the time of his arrest, Atlanta police said that the stolen property had not been recovered. It is unclear whether it has since been found. Bruce66423 commented: "Just for stealing a couple of suitcases from a car.
Funny how the elite punish those who inconvenience them. Can you imagine an ordinary victim see their offender get that sort of sentence?"

Read more of this story at Slashdot.

Categories: Linux fréttir

AWS to Quick admins: The access control didn't work, but you weren't using it anyway, so what's the problem?

TheRegister - Wed, 2026-05-13 22:56
Most users put up with AWS the way you put up with the DMV. I say this with love, but it's hard to disagree that the UI is awful. The console is a UX time capsule if time capsules weren't allowed to ever look like other time capsules. The pricing pages were designed by someone who hates you personally, and you accept all of it because the one thing AWS has historically gotten right is the boring, important stuff. The security model. The IAM language no one likes, but everyone trusts. The boundary between your account and someone else's. Get that wrong, and the whole bargain collapses. So when Fog Security disclosed an authorization bypass in Amazon Quick on May 12 (that's the BI service formerly known as QuickSight, briefly known as Quick Suite, and now apparently just Quick, but check back next week) and AWS responded with a statement claiming "no customer data was at risk," it's fair to ask which definition of customer data they're using. Because it isn't an obvious one, and it certainly isn't mine. What Fog found Fog reports that when an Amazon Quick administrator (which is an absolutely devastating personal insult) uses "custom permissions" to explicitly deny access to AI Chat Agents, the UI correctly hides the feature. Great! Awesome! I sure wish to hell I could do that with S3 buckets to which I do not have access! Notably, there's no other way for an admin to do this - it's custom permissions or naught. The API, however, was perfectly willing to keep answering chat requests for any user in the account who knew how to send them. Fog's proof-of-concept was a non-admin asking the agent "Tell me about mangoes" from a session that was, on paper, locked out of the agent entirely. The agent told them about mangoes. AWS deployed the fix between March 11 and March 12, eight days after Fog reported it via HackerOne. So far, so coordinated. Seriously, for a company of this scale, that's underpants-outside-the-pants superhero speed. Good for you; gold star. 
What came next Where this gets uncomfortable is the response. AWS classified the severity as "none." It issued no customer notification. It published no advisory. After Fog disclosed the HackerOne report and published a blog post, AWS provided a statement to Fog Security reading, in full: "We appreciate Fog Security's coordinated disclosure. This issue was addressed in March 2026. No customer data was at risk and there is no customer action required. As always, customers can contact AWS Support with any questions or concerns about the security of their account." Take that sentence apart and see how much work "no customer data was at risk" is doing. Amazon Quick is described on its own product page as an AI assistant that "connects Slack, Microsoft Teams and Outlook, CRMs, databases, and documents in one place" and "grounds every answer in your real business data." The default chat agent, which is automatically and annoyingly provisioned the instant Quick is enabled whether the customer wants those AI features or not, is the front end for that data. It is the whole point of the front end for that data. Now consider the actual scenario AWS just patched. An administrator at, say, a regulated bank (an unregulated bank is called "a criminal enterprise that hasn't been caught yet") configures custom permissions denying chat agent access to a large group of users. Maybe those users are contractors. Maybe they're in a business unit that isn't cleared for AI tools. Maybe the bank's compliance posture flat-out prohibits shadow AI usage on top of internal data. Until two months ago, every one of those users could send an HTTP request directly to the agent endpoint and get a response. Fog asked about mangoes because they're a security firm doing a clean disclosure, not a malicious insider. A malicious insider would not have asked about mangoes. The question to AWS, with no rhetoric attached: In what sense was customer data not at risk? 
Either the chat agent doesn't actually have access to the data the product page says it does (in which case the marketing department has some serious splainin' to do) or unauthorized users could query an agent wired into customer data, in which case "customer data was at risk" is the correct English-language description of the situation. AWS clarifies, and says the quiet part out loud After this story started circulating, AWS offered a follow-up comment that I sincerely appreciate, because it's so much more honest than the first one. Per a hounded-looking AWS spokesperson: "The researcher was using the Admin Control capability that no customers were actively using when the server side validation was not present." Reading that twice doesn't help. Let me translate. AWS is saying: Yes, the server-side authorization check was missing. Yes, an authenticated user in your Quick account could bypass the only access control mechanism the service offers. The reason this is fine, apparently, is that no real customer had bothered to configure that access control during the window when it didn't work. Um ... what? The defense isn't "the bug wasn't real," which you could be forgiven for hearing in AWS's first statement. The defense also isn't "the bug couldn't have done what Fog says it could have done," which is the even stronger implication of their first statement. The defense is "the access control didn't enforce what we said it did, but luckily nobody was relying on it." This is the corporate-comms equivalent of "the lock on the front door didn't work, but nobody had locked it anyway, so why are you upset?" It's also a surprisingly specific telemetry claim. AWS is asserting that they know zero customers had configured custom permissions to deny chat agent access during the exposure window. 
That's a confident thing to say, and an even more interesting thing to volunteer as a defense, because it doubles as a withering review of Quick's access management model: the only knob the service provides for this purpose, the one AWS's own documentation explicitly tells administrators to use, has zero recorded uptake. The same follow-up also pointed back to the HackerOne thread to demonstrate that AWS told Fog throughout the disclosure window that "user-based authorization remained enforced." Translation: you needed authenticated credentials in the same Quick account to exploit this. Yes. That's intra-account scope, which Fog documented in their writeup, and which is precisely the scope in which custom permissions are supposed to function as a security boundary. AWS saying "user-based authorization was fine" is saying "you couldn't exploit this anonymously from the internet," which was never the threat model in question. The threat model is the contractor with valid SSO credentials whose admin tried to lock them out of some datasets. Why this matters more than it sounds Amazon Quick's access model is already an outlier: IAM policies don't govern Quick's AI Chat Agent, SCPs don't apply, and RCPs don't apply. Custom permissions are the only knob the service provides. If those don't enforce, nothing else does. And per AWS's own follow-up, literally nobody was using them anyway. Both halves of that sentence should be alarming, and AWS is offering them as reassurance. AWS's competitive moat for the last decade hasn't been pricing. It sure as poop hasn't been developer experience, documentation, console design, or the inscrutable poetry of service names. It's been the well-earned belief that AWS gets the foundational things right: boundaries, identity, durability, reliability, and the parts customers can't easily verify themselves. Customers have paid the AWS premium because they trusted the boring stuff. 
This year that trust is being tested in a way it hasn't been before. The 2025–2026 cadence of AWS security advisories has noticeably increased, for reasons that are as yet unclear. Coordinated disclosures from independent researchers keep surfacing missing authorization checks in newer, AI-adjacent services. The fixes are landing fast, which is good. The customer communication isn't landing at all, which is, charitably, a choice. A "severity: none" rating on a bypass of the only access control a service offers is not an objective security finding so much as it is a communication decision. And the communication decision now reads, with the benefit of AWS's follow-up: "We'll fix the bug, we won't tell you it existed, and if you ask we'll explain that you weren't using the feature anyway." AWS gets a lot of forgiveness on the small stuff because they own the big stuff. They might want to reconsider how much of the big stuff they keep classifying as "none." ®
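The failure mode at the heart of this story generalizes: a permission that only the UI consults is not a security boundary. A minimal sketch of the difference between hiding a feature and denying it (all names and logic hypothetical, not AWS's actual code):

```python
# Hypothetical per-user permission store; False means "deny chat agent".
PERMISSIONS = {"contractor-sso-user": {"chat_agent": False}}

def allowed(user: str) -> bool:
    # Default-allow here mirrors the article's description of the default
    # agent being provisioned automatically; a stricter design would
    # default to deny.
    return PERMISSIONS.get(user, {}).get("chat_agent", True)

def ui_shows_chat(user: str) -> bool:
    # The client checks permissions to decide what to render...
    return allowed(user)

def handle_chat_request(user: str, prompt: str) -> str:
    # ...but the server must re-check on every request. This guard is the
    # line that was effectively missing: without it, any authenticated
    # user who knew the endpoint could query the agent directly.
    if not allowed(user):
        raise PermissionError("chat agent access denied")
    return f"answer to {prompt!r}"
```

With the server-side check in place, the denied contractor gets a `PermissionError` from the API even though a hand-crafted HTTP request bypasses the UI entirely, which is the property the disclosed bug violated.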
Categories: Linux fréttir

Google's AI-enabled mouse pointer understands 'this' and 'that'

TheRegister - Wed, 2026-05-13 22:19
Google doesn't design mouse traps, so it's trying to design a better mouse. Google DeepMind announced a research effort to transform the standard computer mouse cursor into a context-aware, AI-powered tool, marking what the company described as the first major rethinking of the cursor in more than 50 years. The project by researchers Adrien Baranes and Rob Marchant integrated Google's Gemini AI model with an experimental context-aware mouse pointer. In this way, the company said, the system can understand where a user clicks, what they are clicking on, and the likely intent behind the interaction. Researchers said there is persistent friction in how people currently interact with AI tools. Most AI assistants today live in a separate window, requiring users to copy, paste, or drag content into a chat interface before receiving help. The new approach aims to reverse that dynamic. "We want the opposite: intuitive AI that meets users across all the tools they use, without interrupting their flow," the researchers stated in the blog post. The mouse pointer works alongside the computer’s microphone, allowing Gemini to listen as the user points. This lets users refer to features on the screen with object pronouns like “this” and “that.” On a demonstration website, a user can hover the cursor over a crab and say “move this here,” and the system understands enough context to grab the crab and move it to where the cursor indicates. The first computer mouse, a one-button prototype with metal wheels for the x- and y-axes, was built out of wood in 1964 and was patented in 1970 by its inventors Doug Engelbart and Bill English, who worked at the Stanford Research Institute. Engelbart foresaw a day when humans and computers would interact more easily and naturally, which he talked about during his 1997 acceptance speech for the Lemelson-MIT Prize. “The computer technology, the digital capabilities, it’s affecting communications, displays, storage, computer processing.
It’s affecting the way you can interface to things a lot more flexibly,” he said. “That’s going to be so pervasively high-impact in our society and our organizations that it's more than anything we’ve had to cope with evolutionary wise.”

Maintain the flow

At Google, the team said it laid out four design principles guiding the project. The first, which the researchers called "Maintain the flow," stated that AI capabilities should work across all applications rather than forcing users into separate AI-specific environments. Under this principle, a user could point at a PDF and request a summary, or hover over a statistics table and ask for a chart, all without leaving the current application. The next, "Show and tell," addressed the burden of prompt writing. The researchers stated that an AI-enabled pointer could capture visual and semantic context from the screen, reducing the need for users to write detailed text instructions to the model. They also developed the AI cursor based on how humans naturally communicate using short phrases and gestures like “this” and “that.” The researchers stated that the system would allow users to issue commands like "Fix this" or "Move that here" while the AI fills in the contextual gaps. The fourth principle, "Turn pixels into actionable entities," lets the pointer recognize structured objects within on-screen content. The researchers stated that this capability could turn a photo of a handwritten note into an interactive to-do list, or convert a paused video frame showing a restaurant into a booking link. In the blog, the researchers said that Google DeepMind has already begun integrating the lessons learned into products. A feature called Magic Pointer will soon roll out on the forthcoming Googlebook laptop platform, which The Chocolate Factory introduced earlier this week.
The company said the technology will also allow users of Gemini in Chrome to point at specific parts of a webpage and ask questions, rather than composing a full text prompt. Experimental demos of the AI-enabled pointer are currently available through Google AI Studio, where users can test image-editing and map-based interactions using the point-and-speak approach. The company said it plans to continue testing the concept across additional platforms, including Google Labs' Disco. ®
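As a toy illustration of the point-and-speak idea (hypothetical structures and names; this is not DeepMind's implementation): what grounds "this" and "here" is the pointer's position at the moment each deictic word is spoken.

```python
# Conceptual sketch of deixis resolution for "move this here".
# All names here are hypothetical; this is not DeepMind's code.
from dataclasses import dataclass

@dataclass
class SceneObject:
    """A rectangular on-screen object the pointer can refer to."""
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def object_under_cursor(scene, px, py):
    """Resolve 'this': whatever object the pointer was over when it was spoken."""
    for obj in scene:
        if obj.contains(px, py):
            return obj
    return None

def move_this_here(scene, this_pos, here_pos):
    """Ground 'move this here' from two timestamped cursor positions:
    where the pointer was at 'this', and where it was at 'here'."""
    target = object_under_cursor(scene, *this_pos)
    if target is not None:
        target.x, target.y = here_pos
    return target
```

In the crab demo's terms: with `scene = [SceneObject("crab", 10, 10, 20, 20)]`, calling `move_this_here(scene, (15, 15), (100, 50))` resolves "this" to the crab and relocates it to where the cursor pointed at "here".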
Categories: Linux fréttir

SOLAI Launches $399 Solode Neo Linux AI Computer

Slashdot - Wed, 2026-05-13 22:00
BrianFagioli writes: SOLAI has launched the Solode Neo, a $399 Linux-based mini PC designed for always-on AI agents, browser automation, and persistent developer workflows. The compact system ships with an Intel N150 processor, 12GB LPDDR5 memory, 128GB SSD storage, Gigabit Ethernet, WiFi, Bluetooth, and a Linux-based operating system called Solode AI OS. The company says the device supports frameworks and tools including Claude Code, OpenAI Codex, Gemini CLI, and Hermes, while emphasizing local control, automation, and privacy-focused workflows running directly from a home network. While SOLAI markets the Solode Neo as an "AI computer," the hardware itself appears aimed more at lightweight automation and cloud-assisted agent tasks than heavy local inference. The low-power Intel N150 should be sufficient for browser automation, scheduling, monitoring, containers, and smaller AI workloads, but the system is unlikely to compete with higher-end local AI hardware designed for running larger models offline. Even so, the idea of a dedicated low-power Linux appliance for persistent AI and automation tasks may appeal to homelab users and self-hosting enthusiasts looking for a simpler alternative to building their own always-on workflow box from scratch.

Read more of this story at Slashdot.

Categories: Linux fréttir

Software Developers Say AI Is Rotting Their Brains

Slashdot - Wed, 2026-05-13 21:00
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to. "We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," a software developer at a small web design firm told 404 Media. "It's making me dumber for sure," a fintech software developer added. "It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take.
And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before." A software engineer at a FAANG company said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."

Read more of this story at Slashdot.

Categories: Linux fréttir

Datacenters are having fewer, but bigger failures

TheRegister - Wed, 2026-05-13 20:48
There's good news and bad news when it comes to datacenter uptime. According to a recent report from the Uptime Institute, bit barns have actually gotten more resilient over the past five years. However, the report suggests that those datacenter failures that do occur are lasting longer and costing more to resolve. According to Uptime, half of the operators surveyed reported an impactful or serious outage in the past three years. “This is the lowest level recorded since 2020 and continues a multi-year trend of improving reliability.” However, the report also finds that datacenter operators may be having a harder time adding additional 9s of reliability to their SLAs. According to Uptime, failure rates are falling at a slower pace, suggesting that existing efforts to improve resiliency may be at the point of diminishing returns. This doesn’t appear to be the result of complacency. Instead, analysts suggest that efforts to improve uptime are being offset by greater system complexity and more challenging operating environments caused by the widespread deployment of power-dense infrastructure used in AI training and inference. “Higher rack densities, load variability, and operating closer to available power limits may increase the likelihood of cascading failures,” Uptime warns. Shortages of critical physical infrastructure like generators, switchgear, transformers, and other power and cooling systems have driven some operators to adopt second-hand or unproven hardware. “This is believed to have contributed to several failures and incidents at some datacenters,” the report reads. Power-related failures remain the leading cause of major datacenter disruptions, but even this is improving. “While power issues accounted for 45 percent of respondents’ most impactful outages in 2025, this is down from 54 percent in 2024,” the analysts write. However, the analysts also warn that this could change as local grids are stressed by ever larger datacenter deployments.
While Uptime doesn’t expect grid power failure to be a primary cause of outages going forward, grid failures can still affect the availability of onsite power. During an outage, datacenters have a limited window to switch over to onsite generators, which can and do fail. Overburdened grids aren’t the only external factors on Uptime’s radar. The industry watchers note that many public outages have been linked to fiber cuts and other networking disruptions. “Digital infrastructure is becoming more distributed with outages originating outside the datacenter, including those tied to power availability, network connectivity or the reliance on external cloud services playing a larger role,” Uptime Analyst Andy Lawrence said in a statement. According to the report, networking-related issues remain the most frequently cited cause for IT disruptions. Even if the datacenter itself doesn’t fail, a bad network configuration can still result in service outages. The good news is that wide adoption of software-defined networking and automated traffic rerouting has helped mitigate this risk. The report found that 20 percent of those surveyed reported having no IT service outages in the past three years, an improvement of nine points from 2024. Software-level resiliency is helping to mitigate localized disruptions, like a fiber cut, by distributing the workload across multiple sites. However, this software resiliency comes with its own challenges, most notably complexity. As we saw with the drone strikes on Amazon’s UAE and Bahrain datacenters, spreading your workloads out across multiple availability zones doesn’t do much good if the failure spreads to multiple sites. While Uptime observed fewer outages in 2025, the report suggests outages may be lasting longer. "While a majority of publicly reported incidents are still resolved within 12 hours (55 percent), the share lasting more than 48 hours has increased for the second consecutive year." 
As we mentioned earlier, many of these were tied to factors like damaged fiber lines, which Uptime notes occurred more than twice as often as usual. As you might expect, the longer the outage, the more costly it can be, particularly when it concerns highly leveraged AI infrastructure. Uptime reports that one in five outages now exceeds $1 million in total costs, and expects that figure to continue to rise in the coming years. ®
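For a rough sense of what those outage durations do to "nines" of availability, here is a back-of-envelope sketch (illustrative arithmetic only, not Uptime's methodology):

```python
# Back-of-envelope availability arithmetic; not Uptime Institute methodology.
import math

HOURS_PER_YEAR = 365 * 24  # 8,760

def availability(downtime_hours_per_year):
    """Fraction of the year a service was up."""
    return 1.0 - downtime_hours_per_year / HOURS_PER_YEAR

def nines(avail):
    """'Three nines' = 99.9% uptime; computed as -log10(unavailability)."""
    return -math.log10(1.0 - avail)

# A single 12-hour outage in a year leaves roughly 99.86% availability,
# just short of three nines. A single 48-hour outage drops availability
# to about 99.45%, well below three nines.
twelve_hour = nines(availability(12))       # ~2.86 nines
forty_eight_hour = nines(availability(48))  # ~2.26 nines
```

Which is why the shift from outages resolved within 12 hours toward outages lasting more than 48 hours matters more for SLAs than the falling outage count suggests.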
Categories: Linux fréttir
