news aggregator

Why Is the US Job Market So Tough, Especially for Recent College Grads?

Slashdot - 1 hour 52 min ago
What's going on with the U.S. job market? "The economy is growing. Unemployment is low," notes the Washington Post. "And yet, for millions of workers, finding a job has become harder than at almost any other point in decades," with the hiring rate "well below pre-pandemic levels for more than a year." Part of the problem? "Of the net 369,000 positions added across the entire economy since the start of 2025, health care alone accounted for nearly 800,000 — meaning every other sector, taken together, shed jobs." By the end of 2025 nearly half of college graduates ages 22 to 27 were working at jobs that didn't require a degree, according to stats from New York's Federal Reserve Bank.

The headline unemployment rate, at 4.2%, looks healthy. But that figure has been buoyed by a shrinking labor force: fewer people are actively looking for work, which keeps the rate down even as hiring slows... [Some large tech companies] are trying to recalibrate after their hiring sprees of 2021 and 2022, when many had raised pay, offered flexible schedules and signed people quickly... Higher interest rates have also made expansion more expensive, pushing many firms to invest in technology rather than headcount.

Another reason hiring has slowed is uncertainty about AI. Even though the technology has not yet replaced large numbers of workers, it is already shaping how companies think about hiring. "I don't think this is AI displacement," said Ben Zweig, chief executive of Revelio Labs, a workforce data company. "What we're seeing is anticipatory." Instead of rushing to bring on new workers, some firms are waiting to see how the technology evolves and which tasks it will eventually take over.

A 39-year-old web developer tells the Post it took 453 job applications to get a handful of interviews and two offers. And a journalism school graduate said they'd sent hundreds of job applications but most led nowhere; they're now couch-surfing to save money.
But the problem seems even worse for young people. One 18-year-old told the Post that in a year and a half of job searching, they'd yet to even meet an employer in person. The unemployment rate for people ages 22 to 27 who recently completed college hit 5.6% in the final months of 2025 — well above the 4.2% rate for all workers, according to national data from the Federal Reserve Bank of New York... At one point last summer, new workforce entrants made up a larger share of the unemployed than at any point since the late 1980s — higher even than during the Great Recession. When hiring slows, the door closes first on those without an existing foothold.

For the class of 2026, the timing could hardly be worse. "It is getting increasingly clear that young people are being more affected by AI than older workers," Zweig said. Companies are not eliminating jobs at scale, but many are slow to hire junior workers. At the same time, older workers are staying in the labor force longer, leaving fewer openings for new arrivals. Even when jobs are available, the bar has shifted. Positions once considered entry level now often require several years of experience, technical expertise and familiarity with AI tools. With fewer openings and more applicants, companies are holding out for candidates who can do the job immediately and need little training...

Employers are also looking for a different mix of skills. An analysis of millions of job postings by Indeed found that communication skills now appear in nearly 42% of all listings, while leadership skills feature in nearly a third — capabilities that are harder to prove on a résumé and harder still to demonstrate without an existing professional network. Christine Beck, a career coach who works with early-career job seekers, said employers are asking more of the people they do hire.

Read more of this story at Slashdot.

Categories: Linux fréttir

Cloud-managed earbuds sound strange - as a concept, and on a plane

TheRegister - 1 hour 56 min ago
Last year, The Register spotted Dell selling cloud-manageable wireless earbuds that feature the company's famously stoic styling at a price higher than Apple charges for its latest AirPods. Dell eventually offered your correspondent a pair of the Pro Plus Earbuds to try so we could hear what all the fuss is about – and we accepted, on condition that the company showed us the cloudy management tools that make the buds worth the big bucks.

Divya Soni, a go-to-market lead, showed me Dell's cloudy Device Management Console, a tool that lets admins enroll and track the buds, send them new firmware, or do things like turn on active noise cancellation by default across a fleet of earbuds. New firmware matters for earbuds because they're Bluetooth devices and the wireless protocol has had its fair share of security scares over the years. The buds have already earned Microsoft's Teams Open Office Certification – a seal of approval for being able to handle noisy offices – plus a Zoom accreditation. New firmware might help there, too. Soni admitted earbuds aren't the main priority for the Device Management Console, which Dell expects customers will mostly use to manage docks and displays. Dell delivers firmware updates to those devices at least once a year, to address security issues or fix bugs. The tool can do the same for keyboards or headsets.

I can't imagine anyone would adopt Dell's Device Manager just to keep an eye on earbuds. I'm also not sure anyone would buy the buds for personal use. I say that because I own two sets of wireless earbuds and, in their own way, both are better than the Dells. My go-to buds are JBL's $40 Vibe Beam 2, which fit brilliantly, bring out some nice nuances in my music, boast batteries that last about six hours, and need only about 15 minutes to recharge. That makes them satisfactory for long-haul flights, during which they drop a warmly enveloping cone of silence when active noise cancelling kicks in.
My other pair are $100 Soundcore Space A40s (bought after destroying another pair). These buds have even nicer noise cancelling powers but fit terribly: I recently endured quite the scene when, running to catch a bus, one dropped out of my ear and bounced into a shrub. The Soundcores redeem themselves with impressive microphones, so I use them when Zooming or recording a podcast. I prefer to leave them at home because the case is bulbous and a little conspicuous in a front jeans pocket.

The Dells are even bigger. They fit my ears well and battery life is strong at around eight hours. Active noise cancelling is poor: a high hiss persists in-flight and I perceived distracting artefacts when using them in noisy environments on the ground. Neither of my two PCs made a Bluetooth connection with the Dell buds. Dell has a fix for that – the buds' case houses a small USB-C dongle devoted to connecting with them. It works every time, delivers a more stable connection than Bluetooth, and brings out some musical nuances that I can't hear with my other buds or desktop speaker.

The dongle feels like a clue about how Dell imagines these buds will be used, because today's laptops seldom offer more than a pair of USB-C ports, and those are commonly used for power in and video out. Dedicating a port to earbuds seems wasteful … unless you're using a Dell dock or monitor that offers more ports. The USB-C audio connector therefore made it hard to escape the idea that Dell expects these buds will almost always be sold as part of a corporate peripheral purchase. I can't imagine consumers would prefer them to Apple's AirPods, or to the many cheaper earbuds that match them for performance. But if the boss decides your organization must have cloud-manageable earbuds, it would be churlish to turn down the chance to use a pair of Pro Plus Earbuds for work and play. The experience of using them is in the name: they're built for the office but can handle after-hours activities.
They’re not delightful, but they’re far from trashy, annoying, or inconvenient. And when I inevitably lose or destroy my current buds I’ll be very happy if I have the Dells on hand. ®

Linux Kernel Outlines What Qualifies As A Security Bug, Responsible AI Use

Slashdot - 5 hours 26 min ago
The Linux 7.1 kernel has added new documentation clarifying what qualifies as a security bug and how AI-assisted vulnerability reports should be handled. Phoronix reports: Stemming from the recent influx of security bugs to the Linux kernel as well as an uptick in bug and security reports from discoveries made in full or in part with AI, additional documentation was warranted. Longtime Linux developer Willy Tarreau took to authoring the additional documentation around kernel bugs. To summarize (since the documentation is a bit too lengthy for a Slashdot story), the AI-assisted vulnerability reports should "be treated as public" because such findings "systematically surface simultaneously across multiple researchers, often on the same day." It adds that reporters should avoid posting a reproducer openly, instead "just mention that one is available" and provide it privately if maintainers request it. The guidance also tells AI-assisted reporters to keep submissions concise and plain-text, focus on verifiable impact rather than speculative consequences, include a thoroughly tested reproducer, and, where possible, propose and test a fix. As for what qualifies as a security bug, the documentation says the private security list is for "urgent bugs that grant an attacker a capability they are not supposed to have on a correctly configured production system" and are easy to exploit, creating an imminent threat to many users. Reporters are told to consider whether the issue "actually crosses a trust boundary," since many bugs submitted privately are really ordinary defects that belong in the normal public reporting process. All the new documentation can be read via this commit.



Europe built sovereign clouds to escape US control. Then forgot about the processors

TheRegister - 5 hours 56 min ago
FEATURE Can digital sovereignty exist on American silicon? Europe is pouring more than €2 billion into sovereign cloud initiatives designed to reduce exposure to US legal reach. The EU's IPCEI-CIS program funds infrastructure development. France qualifies operators under SecNumCloud, a framework with nearly 1,200 technical requirements promising "immunity from extraterritorial laws." But most datacenters and qualified cloud operators still rely heavily on Intel or AMD processors. And inside those processors sits a computer beneath the computer: management engines operating at Ring -3, below the operating system, outside the control of host security software, persistent even when the machine appears powered off.

Under the US Reforming Intelligence and Securing America Act (RISAA) 2024, hardware manufacturers count as "electronic communications service providers" subject to secret government orders. Europe's frameworks certify the clouds. They don't assess the silicon.

The computer your OS can't see

That computer beneath the computer has a name. On Intel processors, it is the Management Engine (ME), or more precisely the Converged Security and Management Engine (CSME). On AMD, it is the Platform Security Processor (PSP). Both run at what security researchers call Ring -3, below the operating system, below the hypervisor, in a privilege level the host cannot see or log.

"It's a computer inside your computer," explains John Goodacre, Professor of Computer Architectures and former director of the UK's £200 million Digital Security by Design program. He is clear about what that means in practice. The ME has its own memory, its own clock, and its own network stack, and because it can share the host's MAC and IP addresses, any traffic it generates is indistinguishable from the host's own traffic to the firewall. The architecture is not theoretical.
Embedded in the Platform Controller Hub, the CSME is a separate microcontroller that operates independently of the host, with direct memory, device access, and network connectivity the host operating system cannot monitor. AMD's PSP works the same way. Intel's Active Management Technology (AMT), the remote management feature the ME enables, exposes at least TCP ports 16992, 16993, 16994, and 16995 on provisioned devices. Goodacre notes that an attack surface exists on unprovisioned hardware too. These ports deliver keyboard-video-mouse redirection, storage redirection, Serial-over-LAN, and power control to administrators managing fleets of devices remotely. The capability has legitimate uses. It also provides a channel that operates at a level below what European sovereignty frameworks can attest.

Microsoft documented in 2017 that the PLATINUM nation state actor used Intel's Serial-over-LAN (SOL) as a covert exfiltration channel. SOL traffic transits the Management Engine and the NIC sideband path, delivered to the ME before the host TCP/IP stack runs. The host firewall and endpoint detection saw nothing, and any security tooling running on the compromised machine itself was equally blind. PLATINUM did not exploit a vulnerability. It exploited a feature, requiring only that AMT be enabled and credentials obtained. In documented cases, those credentials were the factory default: admin, with no password set.

Goodacre catalogues this and related scenarios in a 37-page risk assessment prepared for CISOs evaluating Intel vPro hardware connected to corporate networks. Its conclusion is blunt: connecting an untouched-ME device to corporate resources "exposes the organization to a class of compromise that defeats the host security stack in its entirety." The ME does not stop when the machine appears to. Users recognize the symptom: a laptop powered off and stored for weeks is found, on next boot, to have a depleted battery.
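The AMT port list above can be turned into a quick reachability probe. This is a minimal sketch using Python's standard socket module, not a hardened scanner: the target address is a placeholder, and an open port only indicates that something is listening there, not that AMT is provisioned or exploitable.

```python
import socket

# TCP ports Intel AMT exposes on provisioned machines, per the article.
AMT_PORTS = (16992, 16993, 16994, 16995)

def open_amt_ports(host: str, timeout: float = 1.0) -> list[int]:
    """Return the AMT ports on `host` that accept a TCP connection."""
    reachable = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # connection refused, filtered, or timed out
    return reachable

# Example against a placeholder TEST-NET address:
# open_amt_ports("192.0.2.10")
```

Fleet-wide, the same check run from an out-of-band vantage point is one of the few ways to notice AMT exposure, since tooling on the host itself sits above the ME and cannot see it.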
On modern thin and light platforms, the "off" state that Microsoft documents as Modern Standby does not mean all subsystems are unpowered. The system-on-chip components the Management Engine runs on remain in low-power states, drawing enough to drain a 55 Wh battery over weeks, on the order of 100-200 mW of continuous draw. The implication is documented in Goodacre's risk assessment: "Whether the radio is in a Wake-on-Wireless-LAN listening state is firmware policy. On a device whose firmware has been tampered with during transit through the supply chain, the answer cannot be inferred from the visible power state." A laptop that appears off, in a bag, can associate with a hostile network the user has no knowledge of.

Professor Aurélien Francillon, a security researcher at French engineering school EURECOM, has spent years studying exactly this class of problem. Working with colleagues, he built a fully functional backdoor in hard disk drive firmware [PDF], a proof of concept demonstrating how storage devices could silently exfiltrate data through covert channels. Three months after presenting it at an academic conference, the Snowden disclosures revealed the NSA's ANT catalogue, which documented an identical capability already deployed in the field. "The NSA were already doing it," Francillon says flatly. "Quite amazing." That background informs his assessment of the ME. "Yes, it can probably be used as a backdoor, like many other things, including BMC [baseboard management controller] and many other firmwares," he says. The question, he argues, is not whether the backdoor exists but whether operational controls make it unreachable in practice.

AMD faces the same architectural question. On April 14, 2026, researchers demonstrated the Fabricked attack against AMD's SEV-SNP confidential computing technology, achieving a 100 percent success rate with a software-only exploit. The Platform Security Processor proved vulnerable to the same class of compromise.
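The standby-drain figures above are easy to sanity-check: a 55 Wh battery against a 100-200 mW continuous draw does indeed empty on a timescale of weeks, matching the symptom users report. A quick back-of-envelope in Python:

```python
def days_to_empty(battery_wh: float, draw_mw: float) -> float:
    """Days until a battery of `battery_wh` watt-hours is drained
    by a continuous draw of `draw_mw` milliwatts."""
    hours = battery_wh / (draw_mw / 1000.0)  # Wh / W = hours
    return hours / 24.0

# The 55 Wh battery and 100-200 mW draw cited in the risk assessment:
for draw in (100, 200):
    print(f"{draw} mW: about {days_to_empty(55, draw):.0f} days")
```

At 100 mW the pack lasts roughly three weeks; at 200 mW, under two, which is why a laptop "off" in a drawer comes back with a flat battery.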
On server hardware, the picture is the same. Intel ME runs on servers under a different name, Server Platform Services or SPS, and the BMC, the remote administration controller standard in datacenter hardware, relies on it. "More or less the same," Francillon says of the server variant. For datacenter operators, he sharpens the focus further: "If I look at cloud systems, servers, I would be more concerned with the BMC," pointing to published research demonstrating remote exploitation of BMC vulnerabilities that could allow an attacker to reinstall or fully compromise a server. The BMC is not a separate concern from the ME: on server hardware, it is the primary network entry point into the SPS, making it both the most exposed interface and the most consequential. Both Intel and AMD processors contain management engines that operate below the operating system. The silicon is designed by American companies and subject to American legal process.

The backdoor the CLOUD Act doesn't use

That legal process has teeth that most European policymakers underestimate. The CLOUD Act, passed in 2018, gave US authorities extraterritorial reach to data held by American companies. FISA Section 702 allows intelligence agencies to compel US persons and companies to provide access to communications. Both are well known in European sovereignty discussions. They operate through the front door: a legal order served on a company that controls data.

Less well known is RISAA 2024, a law that opens a different entrance entirely. RISAA amended FISA's definition of "electronic communications service provider" in ways that go beyond cloud operators and platform companies, and beyond the bilateral agreements that European policymakers have built their legal defenses around. Hardware manufacturers now fall within scope. Intel and AMD can be compelled, via secret orders with gag clauses, to cooperate with US intelligence access.
The mechanism through which that access could be exercised is the management engine: a persistent, privileged, network-connected runtime that operates below anything the host operating system can see or block. A SecNumCloud-certified operator can be legally isolated from American data demands. The processor inside its servers cannot. "You've actually got a policy mechanism by which any such machine anywhere can deliver any of its information," Goodacre says. RISAA's two-year term expired on April 20, 2026, but Congress extended it by 45 days while debating reforms. Whether it is renewed, amended or allowed to lapse, the architecture it targets does not change.

SecNumCloud's blind spot

France's SecNumCloud is Europe's most rigorous attempt to build a cloud certification that is legally immune to American law. It did not emerge from nowhere. ANSSI, France's national cybersecurity agency, was established in 2009 as part of a broader effort to build institutional muscle on digital sovereignty long before the term became fashionable. When Edward Snowden revealed the scale of NSA surveillance in 2013, France's response was technical rather than rhetorical: ANSSI published the first SecNumCloud framework in July 2014. A decade later, that framework has grown to nearly 1,200 technical requirements.

At the time, SecNumCloud was a cybersecurity qualification, not a sovereignty instrument: it set requirements for architecture, encryption standards, access controls, and incident response, but said nothing about who controlled the underlying infrastructure or whose laws applied to it. The CLOUD Act changed that. Passed in 2018, it gave American authorities extraterritorial reach to data held by US companies, and suddenly a French cybersecurity framework had a geopolitical dimension it was not designed for.
Version 3.2, introduced in 2022, added Chapter 19: a set of explicit requirements targeting extraterritorial law, mandating that only EU operators could run the service, that no non-EU party could access customer data, and that the provider could operate autonomously without external intervention. It promised "immunity from extraterritorial laws."

In December 2025, S3NS, a joint venture between French defense and technology group Thales and Google Cloud, operating Google Cloud Platform technology under French control, became the first "hybrid" cloud to receive SecNumCloud qualification. The certification triggered heated debate: was this real sovereignty, or American technology with a European flag? But the debate missed a more fundamental question. Does SecNumCloud's certification reach as far as the silicon it runs on?

Francillon is positioned to see both sides of that question. He sits on the French Technology Academy's working group on cloud security, a body that advises on the technical foundations of frameworks like SecNumCloud. And he has spent years studying firmware backdoors in academic literature and demonstrated them in practice. He knows what the hardware can do, and he knows what the certification requires. His starting point is that SecNumCloud provides genuinely valuable protection, and that the silicon gap does not negate that. When asked whether SecNumCloud explicitly addresses Intel Management Engine or AMD Platform Security Processor vulnerabilities, his answer is unambiguous: "There is no direct requirement for firmware backdoor prevention." The framework is not designed to be a technical specification for hardware-layer security. "The document aims to be generic and not dive into technical details," Francillon says. "Most of it is organizational security."
What SecNumCloud does require is that providers build a proper threat model, consider mitigation mechanisms, and monitor administration gateways where external tech support could be exploited. The hardware layer was not omitted by oversight. It was left out by design.

Francillon's assessment is not a fringe view. Vincent Strubel, the director of ANSSI, the very agency that designed and administers SecNumCloud, is equally explicit about what the framework does and does not cover. In a January 2026 LinkedIn post addressing SecNumCloud's scope, he writes that all cloud offerings, hybrid or not, depend on electronic components whose design and updates are not 100 percent controlled in Europe. If Europe were ever cut off from American or Chinese technology, he argues, the result would be a global problem of security degradation, not just in hybrid clouds, but everywhere. Strubel frames SecNumCloud carefully: it is "a cybersecurity tool, not an industrial policy tool." It protects against extraterritorial law enforcement and kill-switch scenarios. It was never designed to eliminate technology dependencies at the hardware layer, and no actor, state or enterprise, fully controls the entire cloud technology stack anyway.

One technology frequently cited in sovereignty discussions is OpenTitan, Google's open source secure element deployed on its server hardware and used within the S3NS infrastructure. Francillon is clear about what it is and, critically, what it is not. "OpenTitan is a secure element, a small chip on the side that can be used for protecting sensitive keys, providing signatures, making attestations," he explains. "It's a bit like a TPM." What it is not is a replacement for the main processor. "The Linux and all your applications will not run on it." OpenTitan sits alongside x86 infrastructure as an external root of trust, independent of the ME. That matters because the default embedded TPM lives inside the ME, making it subject to the ME attack surface.
OpenTitan sits outside that boundary. The two address different problems entirely, and conflating them, as sovereignty advocates sometimes do, obscures where the residual exposure actually lies.

ANSSI's own technical position paper [PDF] on confidential computing, published in October 2025, concludes that Intel SGX, TDX, and AMD SEV-SNP are "not sufficient on their own to secure an entire system, or to meet the sovereignty requirements of SecNumCloud 3.2." Physical attackers are "explicitly out-of-scope" of vendor security targets. Supply chain attackers are "explicitly out-of-scope." The ME attack surface discussed in this article falls into neither category: it is a remote network threat, not a physical one. The paper's conclusion for users concerned about hostile cloud providers is stark: "Switch to a cloud provider they trust, or use their own hardware with physical security protection measures."

The castle with a structural flaw

Francillon does not dispute that SecNumCloud leaves the ME unassessed. His argument is that this does not matter in practice. "What I mean is that if there is a backdoor to access a room, it cannot be directly used if the room is in a castle. You have to pass the castle walls first." Network isolation, monitoring, and threat modeling are the walls. SecNumCloud's operational requirements mandate that administration gateways be isolated, that external tech support be monitored, and that network segmentation prevent lateral movement. The Management Engine backdoor may exist, but the framework makes it unreachable except in what Francillon calls "very high-end attacks."

That qualifier matters. Francillon is not claiming perfect security. He is claiming that proper operational controls reduce the threat to a level where only nation state actors with significant resources could exploit it. For most threat models, he argues, that is sufficient.
"Saying it is useless to do SecNumCloud because there is ME, or whatever backdoor in some hardware we don't control, is a mistake," he says. SecNumCloud improves security over deployments without such controls, he argues, provided that hardware is carefully evaluated and firmware securely configured.

The castle walls have a structural flaw that Goodacre's risk assessment documents in detail. Corporate perimeter firewalls see the device's traffic, but because the ME shares the host's MAC and IP addresses, they cannot tell ME-originated flows apart from legitimate host traffic. "The perimeter cannot attribute a flow to host-versus-CSME origin without out-of-band knowledge," Goodacre writes. A TLS-encrypted tunnel from the ME to an attacker server on port 443 looks, to the perimeter, like any other HTTPS connection the laptop makes. Network filtering reduces attack surface. It does not eliminate the exposure.

Goodacre's position is that a "Tier-3 supply-chain residual remains in both cases and is the irreducible cost of buying any silicon that ships with a Ring -3 manageability engine." He defines Tier 3 as nation state cyber services operating at the level of compromising firmware in transit, mis-issuing CA certificates via in-country authorities, and modifying hardware at customs or courier hubs. The NSA's Tailored Access Operations division treated supply chain interdiction as routine business, with an explicit doctrinal preference for BIOS and firmware implants over disk-level malware.

His risk assessment's data on fleet vulnerability is concrete. Industry telemetry from Eclypsium, analyzing production enterprise environments, found that approximately 72 percent of devices observed remained vulnerable to INTEL-SA-00391 years after public disclosure, and 61 percent remained vulnerable to INTEL-SA-00295.
The same reporting documented that the Conti ransomware group developed proof-of-concept Intel ME exploit code with the intent of installing highly persistent firmware-resident implants. "Connecting an untouched-ME vPro laptop to corporate resources exposes the organization to a class of compromise that defeats the host security stack in its entirety," Goodacre concludes. "The exposed controls include BitLocker full-disk encryption, FIDO2-protected sign-in, endpoint detection and response, the host firewall and the corporate VPN."

The disagreement between Francillon and Goodacre is not about whether the vulnerability exists. Both confirm it does. Both confirm AMD faces the same issue. Both confirm software alone cannot fix it. The disagreement is about whether operational controls, Francillon's castle walls, make an architectural backdoor irrelevant in practice, or merely reduce its exploitability while leaving nation state actors with a path through. For SecNumCloud operators processing sensitive government or commercial data, the distinction is not academic. SecNumCloud is designed for a higher level of security than standard cloud certifications, though it is not intended for classified or restricted government data. The threat that can still slip through Francillon's castle walls is precisely the threat SecNumCloud was designed to keep out.

The gap nobody names

Goodacre told The Register he tested awareness of the Management Engine with various attendees at the CyberUK conference in April 2026. "Almost no one" knew about it, he reports. The gap between the sovereignty rhetoric and the silicon reality is not being surfaced in policy discussions, procurement decisions, or public debate over what digital sovereignty means. The debate that does happen, hybrid versus non-hybrid, Google/Thales versus pure European providers, focuses on operational control and legal structure. It does not address the shared silicon foundation.
Strubel's LinkedIn post pushes back against the framing: "Imagining this problem is limited to hybrid cloud offerings is pure fantasy that doesn't survive confrontation with facts." Every cloud provider, hybrid or not, depends on components they don't fully control. The distinction isn't hybrid versus sovereign. It is what you're protecting against, and whether the controls you're implementing address that threat.

There is no immediate solution. RISC-V, the open source processor architecture European sovereignty advocates point to as a long-term alternative, remains years from competitive performance in datacenter workloads. "It will take decades," Francillon says flatly. Arm is a cautionary precedent: it took nearly 20 years from the first server attempts before Arm achieved any meaningful datacenter presence.

Can sovereignty exist on compromised silicon?

For Goodacre, the bottom line is simple: the Tier-3 supply chain residual is "the irreducible cost of buying silicon with a Ring -3 manageability engine." Francillon argues that operational controls, including network isolation, monitoring, and threat modeling, make the backdoor unreachable except in very high-end attacks. Strubel acknowledges hardware dependencies are real but maintains that SecNumCloud provides valuable protection for what it does cover: legal control, kill-switch resistance, defense against cyberattacks and insider threats.

The disagreement is not about technical facts. It is about risk tolerance and threat model calibration. For European CIOs choosing SecNumCloud-certified providers, the question to ask vendors is: how do you address Intel Management Engine and AMD Platform Security Processor in your threat model? The answer will clarify whether the vendor treats the hardware layer as out of scope, or has implemented controls that reduce but do not eliminate the exposure. For European policymakers, the question is broader. Can digital sovereignty exist on non-sovereign silicon?
The current frameworks do not answer that question. They certify operational controls, legal structure, and autonomous execution capability. They do not certify silicon-layer immunity, because the hardware is American or Chinese, subject to American or Chinese law, designed with management engines that European authorities did not specify, cannot legally compel on their own terms, and cannot replace. Whether that is a gap worth addressing, or a risk worth accepting as the unavoidable cost of participating in global technology supply chains, is a question Europe will need to answer for itself. ®

One in seven Brits swapped their GP for ChatGPT, study finds

TheRegister - 7 hours 53 min ago
Brits are now asking chatbots about mysterious lumps and weird rashes instead of calling their GP, which is probably not the digital healthcare revolution anybody meant to build. A new study from King's College London found that one in seven people in the UK have used AI instead of contacting a doctor or healthcare service, while one in ten said they had turned to chatbots rather than professional mental health support. Convenience was the biggest reason, cited by 46 percent of respondents, closely followed by curiosity at 45 percent. Another 39 percent said they used AI because they were unsure whether their symptoms were serious enough to bother a GP in the first place. The report, based on a survey of more than 2,000 adults, suggests that AI systems are quietly becoming Britain's unofficial second-opinion service while regulators are still arguing about what counts as "AI-enabled healthcare" in the first place. However, some respondents said the chatbot conversations ended up replacing medical care altogether. Around one in five respondents said chatbot advice discouraged them from seeking professional help, and 21 percent said they skipped contacting a healthcare provider because of something the AI told them. Public confidence in AI healthcare also looks shaky. The survey found Britons are almost perfectly split on whether AI should be involved in clinical decision-making, with 37 percent supporting its use and 38 percent opposing it. Safety and accuracy worries topped the list of public concerns about NHS AI use. Women, in particular, were less comfortable with the idea than men, and far more likely to say patients should be told when AI is involved in their care. Oddly, younger adults were among the most skeptical. Nearly half of 18 to 24-year-olds opposed clinical AI use, compared with 36 percent of people over 65. The public also appears to think AI has already taken over GP surgeries to a much greater extent than is the case. 
Respondents guessed that around 39 percent of GPs use AI in clinical decision-making, when the actual figure is closer to 8 percent. Professor Graham Lord, executive director at King's Health Partners, warned that responsibility for AI mistakes often lands on clinicians even when they have little control over the systems being deployed. "When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced," Lord said. Which sounds suspiciously like someone in healthcare has already seen the incoming paperwork. ®
Categories: Linux fréttir

Japan Runs Out of Robot Wolves In Fight Against Bears

Slashdot - 9 hours 26 min ago
Japan's worsening bear problem has created a shortage of handmade "Monster Wolf" robots, which are $4,000 solar-powered scarecrow-like devices with glowing eyes, sensors, and blaring sounds designed to frighten the animals away. "We make them by hand. We cannot make them fast enough now. We are asking our customers to wait two to three months," company president Yuji Ohta recently told the AFP. Popular Science reports: First released in 2016 by the manufacturer Ohta, Monster Wolf was originally designed to ward off agricultural foes like boars, deer, and the island nation's Asian black bear (Ursus thibetanus) and brown bear (Ursus arctos) populations. The creative solution quickly went viral for its red LED eyes and menacing fangs -- as well as its admittedly odd, furry pipe frame. Starting at around $4,000, each bespoke Monster Wolf is now equipped with battery power, solar panels, and detection sensors. Its speakers are programmed with over 50 audio clips including human voices and sirens audible over half a mile away. These aren't assembly line products, however. Each Monster Wolf is custom made, and Ohta simply can't keep up with the current demand. [...] Ohta told the AFP that amid the ongoing crisis, there has been "growing recognition" that Monster Wolf is "effective in dealing with bears." The main customer base remains farmers, but orders are also coming from golf courses and rural workers. Upgraded versions will soon include wheels to actually chase animals and patrol preset routes. There are also plans to release a handheld version for outdoor enthusiasts and schoolchildren. Until Ohta catches up with its orders, residents and visitors are encouraged to review the Japanese government's own bear safety tips.

Read more of this story at Slashdot.

Categories: Linux fréttir

Wood Burning Is Reintroducing Lead Pollution Into the Air, Scientists Find

Slashdot - 12 hours 56 min ago
An anonymous reader quotes a report from The Guardian: Wood heating is reintroducing lead into the air of local communities and homes, a systematic investigation by academics has found. Overwhelming evidence of lead's neurotoxicity meant the metal was banned as an additive in petrol more than 25 years ago. The research by academics from the University of Massachusetts Amherst began by analysing samples of particle pollution from five suburban and rural towns in the north-east US. They looked for tiny particles of potassium that are given off when wood is burned and also particles containing lead. Samples from seven winters revealed associations between potassium and lead. When there were more wood burning particles in a daily sample, there was more lead in the air, with clear straight-line relationships in four of the five towns. The project was extended to 22 other towns across the US. The relationships between lead and potassium varied from place to place, being strongest in the Rocky Mountains. By factoring in the effects of temperature, moderate to strong associations in their analysis strengthened the conclusion that the extra lead came from wood burning. The lead concentrations were less than the US legal limits, but any exposure to the metal is harmful. [...] Although less than legal limits, lead particles are routinely measured in UK cities in winter when people are also burning wood. This is normally attributed to waste wood covered with old lead paint, but the UMass Amherst study suggests the metal is coming from the wood itself. This means that any wood burning could increase exposure in neighborhoods and at home. Tricia Henegan, a PhD student at UMass Amherst and the first author on the research, said: "The most logical answer [to the question of how lead ends up in wood] is that it comes from uptake in the soil, probably riding along with the nutrients and water that trees need. 
Once in the tree, it deposits in the tree's tissues and remains until that tree is burned." Other research has found that it can then become part of the smoke. "The use of wood as an energy source is a relic of the past, one that should not be relived if given a choice. Although wood fuel use can feel nostalgic, it does have negative consequences on air quality, and therefore public health."
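The "clear straight-line relationships" the researchers describe are what an ordinary least-squares fit of daily lead readings against wood-smoke potassium would produce. A minimal sketch of that fit, using invented numbers rather than the UMass Amherst data:

```python
# Illustrative only: synthetic daily samples, NOT the study's measurements.
# The study associated wood-smoke potassium with airborne lead via
# straight-line (least-squares) relationships in daily particle samples.

def least_squares(xs, ys):
    """Return slope and intercept of the best-fit line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical daily potassium (ng/m^3) and lead (ng/m^3) concentrations
potassium = [10, 20, 30, 40, 50]
lead = [1.1, 1.9, 3.2, 3.9, 5.1]

slope, intercept = least_squares(potassium, lead)
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```

A positive, statistically meaningful slope in many towns, after controlling for temperature, is what pointed the authors at wood burning itself as the lead source.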

Read more of this story at Slashdot.

Categories: Linux fréttir

Kioxia and Dell Cram Nearly 10PB Into a Single 2U Server

Slashdot - Fri, 2026-05-15 23:00
BrianFagioli writes: Kioxia and Dell Technologies say they have built a 2U server configuration capable of scaling to 9.8PB of flash storage, which is the sort of density that would have sounded impossible just a few years ago. The setup combines a Dell PowerEdge R7725xd Server with 40 Kioxia LC9 Series 245.76TB NVMe SSDs and AMD EPYC processors. According to Kioxia, matching the same capacity with more common 30.72TB SSDs would require seven additional servers and another 280 drives. The companies are pitching the hardware squarely at AI and hyperscale workloads, where storage is rapidly becoming a bottleneck alongside compute. Kioxia claims the denser configuration can dramatically reduce power consumption and rack space requirements while remaining air cooled. The announcement also highlights how quickly enterprise storage capacities are escalating as organizations race to support larger AI models, massive datasets, and increasingly demanding data pipelines.
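The density claim is easy to sanity-check from the figures in the announcement:

```python
# Back-of-envelope check of the Kioxia/Dell figures.
DRIVE_TB = 245.76          # Kioxia LC9 capacity per SSD, in TB
DRIVES_PER_SERVER = 40     # drives in the PowerEdge R7725xd configuration

total_tb = DRIVE_TB * DRIVES_PER_SERVER
print(f"total: {total_tb / 1000:.2f} PB")            # ~9.83 PB in one 2U box

# Matching that capacity with common 30.72TB SSDs:
small_drives = round(total_tb / 30.72)
print(f"30.72TB drives needed: {small_drives}")      # 320 drives
print(f"extra drives: {small_drives - DRIVES_PER_SERVER}")  # 280 more
```

At 40 drives per server, 320 drives means eight servers, i.e. the "seven additional servers and another 280 drives" Kioxia cites.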

Read more of this story at Slashdot.

Categories: Linux fréttir

AMD Is Bringing Improved FSR 4 Upscaling To Its Older GPUs

Slashdot - Fri, 2026-05-15 22:00
AMD says FSR 4.1 will finally bring its newer hardware-accelerated upscaling technology to older Radeon GPUs. "The rollout will begin in July with RDNA3- and 3.5-based GPUs, which include the Radeon RX 7000 series, as well as integrated GPUs like the Radeon 890M and Radeon 8060S," reports Ars Technica. "In 'early 2027,' support will also be extended to the RDNA2 architecture, which includes the Radeon RX 6000 series, integrated GPUs like the Radeon 680M, and the Steam Deck's GPU. This would also open the door to supporting FSR 4 on the PlayStation 5 and Xbox Series X and S, all of which also use RDNA2-based GPUs." From the report: [AMD Computing and Graphics SVP Jack Huynh's] short video presentation didn't get into performance comparisons, but did mention that AMD had to work to get FSR 4's superior hardware-backed upscaling working on its older graphics architectures. RDNA4 includes AI accelerators that support the FP8 data format in the hardware, and porting FSR 4 to older GPUs meant getting it running on the integer-based INT8 hardware in the RDNA3 and RDNA2-based GPUs. This means that FSR 4.1 running on an RDNA3- or RDNA2-based GPU may come with a larger performance hit relative to RDNA4 cards, or that image quality may differ slightly. Modders have already worked to get FSR 4 working on INT8-supporting GPUs, and the older GPUs reportedly see a 10 to 20 percent performance hit relative to FSR 3.1 running on the same hardware. AMD's official implementation may or may not improve on these numbers. [...] Any games that support FSR 4 should be able to support FSR 4.1 running on Radeon 7000-series cards; users will presumably be able to install a driver update in July that enables the new feature. Games that support the older FSR 3.1 can also be forced to use FSR 4 in the Radeon graphics driver.
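Moving a network from FP8 to integer hardware hinges on quantization: mapping floating-point values onto an integer range with a shared scale. The sketch below shows the general symmetric INT8 quantization idea — this is an illustration of the technique, not AMD's actual FSR 4.1 port:

```python
# A minimal sketch of symmetric INT8 quantization, the general technique
# for running floating-point networks on integer hardware. Illustrative
# only; AMD's implementation details are not public in this article.

def quantize_int8(values):
    """Map floats to int8 codes with a single shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
print(q)  # integer codes the INT8 units operate on
```

Quantization is lossy — each value is recovered only to within one scale step — which is one reason image quality on INT8 hardware may differ slightly from the FP8 path.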

Read more of this story at Slashdot.

Categories: Linux fréttir

Google reimburses Register sources who were victims of API fraud

TheRegister - Fri, 2026-05-15 21:26
Two of the Google Cloud developers who were hit with bills for thousands of dollars following unauthorized API calls to Gemini models have had their bills reversed, the users told The Register in recent days. But Google plans to continue automatically expanding users' spending limits, leaving them and countless other customers vulnerable to bills they cannot afford, whether from fraud or a sudden traffic surge. Australia-based developer Isuru Fonseka – whose usage bill skyrocketed to $17,000 in minutes after Google automatically upgraded his $250 spending tier when a hacker took control of his account – told us that he was happy to put this behind him. “It’s so good. It felt like they were just giving me the run around until your article. I just hope they fix it properly for everyone,” he said. “It’s great that the article was able to get the refund but it’s sad that it had to go to that level for them to process it urgently.” Despite refunding his money, Google seems to have lost a customer. Fonseka said that he has since ensured his API cannot be used with Google’s stable of AI products, and will likely try one of the independent foundation models if he needs those features. “I’ve disabled Gemini on everything – if I ever plan to use AI on my projects, I’m better off using it via a different service such as OpenRouter or going directly to one of the other LLM providers – just as a way to keep Gemini out of my account and the risk as low as possible,” he said. Fonseka said he was blindsided by a Google policy that allowed the company to automatically upgrade a user’s billing tier without permission or adequate warning. He had thought by signing up for a user tier with a $250 spending cap that his bills would be restricted to that amount. It was only after attackers exploited his API key that he learned Google would upgrade the cap automatically based on his history of spending. 
While Google acknowledged that the automatic tier upgrades allowed credential hijackers to rack up thousands of dollars in bills in cases like the one Fonseka described to The Register, it said it has not reconsidered the policy. In a statement to The Register, Google said that it wants to prioritize access to Google Cloud services without interruption, preferring to prevent service outages over respecting users' budget preferences. “With our automated growth tiers, we helped businesses scale as usage increased, built on their historic reputation of payments and usage,” a Google spokesperson told us in a statement. “This prevents their business having a hard service outage once they pass an artificial system quota.”

Tiers vs spending caps

There is some confusion between Google's usage tiers and its newly introduced spending caps, and Google’s documentation hasn't helped much. Google says its users can set their usage tiers not to exceed a certain spending level. For example, the maximum spending allowed for a Tier 1 user like Fonseka is $250. However, if the account is older than 30 days and if, over the lifetime of their work with Google, they have spent at least $1,000, then Google will automatically allow that account to spend up to $100,000. So good customers have the most to fear from fraud or from an unexpected spike in usage. In several cases shared on social media, Google users were only aware of this after their credit cards were billed thousands of dollars. On April 22, Google introduced a trial of hard caps on spending within Google Cloud, but those are in a preview and are approved on a case-by-case basis. "We’re excited to announce that Spend Caps are coming soon to Google Cloud. Designed to work with Google Cloud Budgets, FinOps and DevOps can set budgets that enforce automated cost boundaries (caps) at the project level for AIS, Agent Platform, Cloud Run, Cloud Run Functions, and Maps," Google wrote. 
"These caps alert and ultimately pause API traffic once your set budget is reached, but leave your resources intact. If you need the traffic to resume, simply suspend the Spend Cap." Spend caps can only be set per project for a single, eligible service, Google said. Eligible services for this preview include the Gemini API, Agent Platform (previously known as VertexAI), Cloud Run, Cloud Run Functions, and Maps, Google said. Users who apply for a spending cap will have their submissions reviewed on a “one to two week basis” and customers are added in the order they submitted. “Once onboarded, you will receive an email with instructions on how to access the feature as well as details on how to submit feedback,” Google writes on its sign-up page. Rod Danan, CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities, told The Register earlier this week that he saw his bill skyrocket to $10,000 in just 30 minutes of usage by attackers who exploited his public API key. Google forgave the charges on Thursday, he said. “They got back to me today agreeing to a refund,” he told us. “It's definitely relieving. You want to focus on the business. You don't want to have to focus on going and getting refunds from some crazy charges.” He said the stress of running a startup is hard enough without the addition of fighting one of the largest companies in the world imposing erroneous five-figure charges. “I'm happy that it's behind me. I wish it was easier,” he said. “I've learned, yeah, definitely don't give up. Be annoying whenever something is wrong and just keep pushing. Again, try to make it as public as possible, get louder and louder until the people you need to hear you actually hear you.” Google said any unauthorized use of API keys will be investigated and it historically has treated customers compassionately when there is clear evidence of fraud or error. 
“We take reports of credential abuse and the financial security of our customers extremely seriously; and as you know are investigating these specific cases you have pointed to and we will work directly with any impacted users to resolve charges resulting from fraudulent activity,” Google said. ®
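The automatic tier-upgrade behavior described in the article reduces to a simple rule: a Tier 1 account nominally capped at $250 is allowed to spend up to $100,000 once it is more than 30 days old and has at least $1,000 of lifetime spend. A sketch of that rule — the function and constant names here are hypothetical, not Google's API:

```python
# Hypothetical sketch of the reported auto-upgrade policy; names are
# invented for illustration, not taken from any Google Cloud API.

TIER1_CAP = 250.0
AUTO_UPGRADED_CAP = 100_000.0

def effective_spending_cap(account_age_days, lifetime_spend):
    """Return the cap that actually applies under the reported policy."""
    if account_age_days > 30 and lifetime_spend >= 1000:
        # Established accounts are silently allowed far higher spend,
        # which is why "good customers have the most to fear".
        return AUTO_UPGRADED_CAP
    return TIER1_CAP

print(effective_spending_cap(account_age_days=400, lifetime_spend=1500))  # 100000.0
print(effective_spending_cap(account_age_days=10, lifetime_spend=0))      # 250.0
```

The gap between the two return values is exactly the exposure that turned a $250 expectation into a five-figure bill when API keys were hijacked.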
Categories: Linux fréttir

Datacenters slurping up so much juice they boosted prices 75% in largest US energy market

TheRegister - Fri, 2026-05-15 21:02
Prices in the United States' largest wholesale power market have jumped 75 percent in the past year thanks to demand from datacenters. And an independent watchdog predicts things will only get worse without some serious changes. The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, which has the densest cluster of datacenters in the world. The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, a firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report. According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year. Monitoring Analytics didn’t mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM. “Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices,” the report reads. “But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions.” As for what might come next, the report doesn’t ignore the likely outcome of the current situation, either. “The price impacts on customers have been very large and are not reversible,” the report states, but the bad news doesn’t stop there. “The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner.” Based on the rest of the report, a timely resolution to the datacenter load issue shouldn’t be expected, at least not in a way that’ll benefit locals. 
For starters, Monitoring Analytics found that - like pretty much everywhere right now - power grids aren’t ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but those upgrades have been delayed multiple times, with no implementation date on the calendar, per the report. “The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future,” Monitoring Analytics asserted.

Current plan: Shift the risk to everyone else

PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn’t convinced the Interconnection is going about the process in the right way. The currently proposed auction structure, says the watchdog, would “generally shift significant risk to other PJM customers,” which is a temptation the group says “should be resisted.” “Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers,” the report continued. “Yet that is what most of the proposals related to a backstop auction actually do.” As for what PJM ought to be doing, you probably won’t need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track options for interconnection for BYOP datacenters, and otherwise a queue that would only connect datacenters when there is adequate capacity to serve them. 
“This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing,” the analysts argue. When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers. “PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules,” the Interconnection told us. Monitoring Analytics didn’t respond to questions. Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed DC projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only with electrical price increases, noise, and eyesores, but environmental harm as well. ®
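The headline figure checks out against the quarterly numbers in the report:

```python
# Year-over-year increase implied by Monitoring Analytics' figures.
q1_2025 = 77.78    # total cost per MWh, Q1 2025 ($)
q1_2026 = 136.53   # total cost per MWh, Q1 2026 ($)

pct_increase = (q1_2026 - q1_2025) / q1_2025 * 100
print(f"{pct_increase:.1f}%")  # 75.5%
```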
Categories: Linux fréttir

Bitwarden Scrubs 'Always Free' and 'Inclusion' Values From Its Website

Slashdot - Fri, 2026-05-15 21:00
Bitwarden appears to be undergoing a quiet shift in leadership and messaging. Its longtime CEO and CFO have stepped down, while the company has removed "Always free" from a prominent password-manager page and replaced "Inclusion" and "Transparency" in its GRIT values with "Innovation" and "Trust." Fast Company reports: In February, longtime CEO Michael Crandell moved to an advisory role, according to LinkedIn, with no announcement from the company. His replacement, Michael Sullivan, former CEO of both Acquia and Insightsoftware, touts his experience with "all facets of mergers and acquisitions" on his own LinkedIn page, including experience working with leading private equity firms. CFO Stephen Morrison also left Bitwarden in April, replaced by former InVision CEO Michael Shenkman. Both Crandell and Morrison joined the company in 2019. Kyle Spearrin, who started Bitwarden as a fun hobby project in 2015, remains the company's CTO. Meanwhile, Bitwarden has made some subtle tweaks to its website. The page for its personal password manager no longer includes the phrase "Always free." Previously this appeared under the "Pick a plan" section partway down the page, but that section no longer mentions the free plan, though it remains available elsewhere on the page. Bitwarden made this change in mid-April, according to the Internet Archive. Bitwarden has also stopped listing "Inclusion" and "Transparency" as tentpole values on its careers page. The company has long defined its values with the acronym "GRIT," which used to stand for "Gratitude, Responsibility, Inclusion, and Transparency." After May 4, it changed the acronym to stand for "Gratitude, Responsibility, Innovation, and Trust." The phrase "inclusive environment" still appears under a description of Gratitude, while "transparency" is mentioned under the Trust heading. They're just no longer the focus.

Read more of this story at Slashdot.

Categories: Linux fréttir

Git is unprepared for the AI coding tsunami

TheRegister - Fri, 2026-05-15 20:15
Last month, Mitchell Hashimoto, HashiCorp co-founder, publicly declared that he was moving his popular open source Ghostty terminal emulator project from GitHub. GitHub runs the world’s largest service built on the Git distributed version control system, created by Linus Torvalds. Once an enthusiastic user, Hashimoto grew disillusioned with service disruptions and increasingly slow pull requests. “This is no longer a place for serious work if it just blocks you out for hours per day, every day,” he wrote. Hashimoto was quick to defend Git itself: “The issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc.” Many have blamed GitHub’s performance on Microsoft, which acquired the company in 2018. But to be fair, GitHub itself has been experiencing heavier-than-expected traffic thanks to a proliferation of AI-generated pull requests. In 2025, GitHub saw 206 percent year-over-year growth in AI-generated projects, measured by the use of Bash shell scripts, a widespread way of running agents. And more AI code means more bugs. Research from GitClear found that AI-generated code averaged 10.83 issues per pull request, compared to 6.45 for the old-fashioned human variety. Our new agentic workforce is raising big questions about how the entire software development lifecycle (SDLC) should evolve, and whether Git should come along. “Agents are nudging us toward a continuous flow,” warned Peco Karayanev, co-founder of DevOps platform provider Autoptic, which bridges Git-based deployments with observability tools for agent-based remediation. Autoptic’s entire user base runs on some form of Git, either homebrew or from a service provider like GitLab. Given the volume and magnitude of changes across repos, “we need git to start operating in a more continuous mode,” Karayanev wrote in an email interview. Git operations, especially when used in GitOps-style automated deployments, still need to be managed by people. 
Updates, commits, pushes, merges are often yoked into sequences of “stop/go” episodes where someone has to hit enter on the keyboard a few times to continue the workflow, Karayanev noted. This model may not hold up once agents start getting priority.

A butler for Git

Git has always had its share of critics, especially those who use the tool daily. There may not be another piece of software that is so widely adopted and yet so inscrutable. Torvalds and other Linux kernel developers built Git in 2005 after frustrations with trying to shoehorn Linux code into the commercial BitKeeper tool. Linux, a global group project of mammoth proportions, required a distributed version control system able to support non-linear development of thousands of parallel branches. Like any distributed system, Git can be difficult to understand. Scott Chacon, one of GitHub's co-founders, co-wrote a book on using Git (2009’s Pro Git), and he still finds himself occasionally flummoxed by the version control system. There are still “sharp edges” to Git, Chacon told The Register. “There's a lot of stuff that it doesn't do very well from a usability standpoint,” he said. Chacon co-founded GitButler as a way to “rethink the porcelain” of Git, to make Git more suitable to modern workflows. (Last month, GitButler received $17 million in venture capital funding.) Think of GitButler as a super-powered Git client. It allows the developer to work on two different branches simultaneously, using a technique called virtual branching. It reconciles the code a developer is working on with the upstream code. They can reorder commits, or edit the comments of a previous commit. It offers richer metadata about the files being worked on. It can show which commits are unique to that branch. 
Best of all, it eliminates what many developers call “rebase hell,” where merges into an updated codebase must be checked one at a time, a problem GitButler solves by keeping the user’s code synchronized with what is upstream. Many of these actions GitButler offers can be done through the Git command itself – although Git’s command language, and its rules, can be so obtuse that “you will probably make a mistake at some point,” Chacon said.

A Git for agents

Chacon believes GitHub’s current reliability issues stem from the current tsunami of agentic work. This is “ironic” because GitHub was built to scale Git, he said. “But an influx of agents is pushing the service to the brink.” The problem lies not with Git itself, but with everyone using one service, Chacon argued. Last year, GitHub had about 180 million users working across 630 million repositories – with 121 million created in 2025 alone, according to the company’s most recent annual Octoverse report. “From the longer-term perspective, it doesn't need to be like this,” he argued. Maybe Git should be run locally, mirrored globally and managed with clients … such as GitButler, Chacon suggested. Perhaps Git-based version control systems could be customized for specific industry verticals. We need to think about how we “distribute these systems more,” he said. “Git is designed to be distributed but we’re not distributing it,” he said. GitButler has created a command line interface specifically for agents. It was designed to give MCP servers an integrated map of the repository, which otherwise would require stitching together multiple Git commands. The Virtual Files concept allows the agent to work on a section of code that is also being worked on by a developer, or another agent. These are changes that point to a rethinking of how a Git workflow should run. “I think all of these systems should fundamentally change, because all of our workflows have changed, right? 
There needs to be different, sort of primitives for how to deal with these problem sets,” Chacon said.

A tip from gaming development

One company that wants its platform to replace Git altogether is Diversion, which has built an eponymous distributed version control system initially pitched for large-scale game design. “Git's architecture is actually an issue that prevents scaling,” argues Diversion CEO Sasha Medvedovsky in an interview with The Register. “Fundamentally it's an architecture problem that can't be fixed and is a bottleneck for end users and hosting services.” Git is a distributed system insofar as every user, or hosted service, requires a dedicated database (much like blockchain). “It's not distributed in the regular sense but rather replicated,” he wrote in an exchange with The Register on LinkedIn. Operations run on a single thread, making concurrent operations impossible. As a result, the larger the repository, the slower the commit operations – a deadly combination for fast-paced agentic software development, Medvedovsky noted. Of course, every CEO will have their talking points ready about a competitor’s weaknesses (Diversion is finalizing a blog post with hard numbers about Git and GitHub performance). But there are a growing number of other initiatives around prepping Git for the challenging times ahead. Perhaps most notable is Jujutsu, a Git-compatible distributed version control system, stewarded by Google senior software engineer Martin von Zweigbergk. Like GitButler, Jujutsu (jj) aims to eliminate a lot of the annoyances that come with Git. It includes an undo button and the ability to keep committing even when there is a conflict. And because everything written in C must be recast into Rust these days, long-time Git contributor Sebastian Thiel started a project called Gitoxide to rebuild Git in Rust. 
Potential benefits include significant performance improvements through multicore processing and the much-needed memory safety that comes with Rust.

Will Git 3 solve all the problems?

Git’s chief maintainer is Junio Hamano, who took the reins from Torvalds in 2005. And he remains busy keeping Git current. At FOSDEM this February, core Git contributor and GitLab engineering manager Patrick Steinhardt discussed some of the changes coming in the next version of Git, version 3, which is gradually being rolled out this year. One of the chief improvements will be in the way Git manages the commit references, the IDs that point to each change being made. Surprisingly, this operation is a real bottleneck for the software. “The design is inefficient,” Steinhardt told the audience. Every time a programmer commits a code change, it gets recorded in a “packed-refs” file, which saves time by not giving each commit its own reference file. As projects grow larger, however, it takes longer for Git to amend or to delete a reference in packed-refs (one GitLab repo has a packed-refs file of more than 20 million references, Steinhardt said). This is especially problematic when you have multiple, simultaneous readers and writers of that file. And just forget about getting a consistent view of all the references. The freshly implemented Reftable feature, which will be the default in Git 3.0, stores references in an indexable binary format. The Git folks borrowed this concept from the Eclipse Foundation’s JGit Java implementation of Git. Reftable allows for block updates, eliminating the need to rewrite a 2 GB-sized file for a single entry. And it is much faster for reading, which would pave the way for Git supporting larger, more sprawling repositories – perfect for an ever-busy agentic workforce. For nearly two decades, Git has proved to be the version control system of choice for geeks worldwide. 
But even with these new features and various third-party enhancements, can it retain relevance for a new generation of agentically enhanced coders? The battle is on. ®
Categories: Linux fréttir

The Era of 15GB Free Gmail Storage Is Ending

Slashdot - Fri, 2026-05-15 20:00
Google has confirmed it is testing a 5GB storage limit for some new Gmail accounts, with users able to unlock the standard 15GB by adding a phone number. Android Authority reports: While the company didn't mention which regions are impacted, user reports from yesterday were mostly from African countries. That said, if Google's tests prove successful, this could become the norm for new sign-ups in more regions. The company could be testing ways to discourage users from creating multiple Gmail accounts to access free cloud storage. However, if you already have a Gmail account with 15GB of free storage, it shouldn't be affected by this change. Google's support page now says "up to 15GB of storage" – a recent change, as an archived version of the page from February did not include the words "up to." Whether the test has been running since early March or Google simply updated its wording before the test began, the hedged language suggests the company could roll the change out globally.

Read more of this story at Slashdot.


AI agents show they can create exploits, not just find vulns

TheRegister - Fri, 2026-05-15 19:45
Sure, AI agents such as Mythos can find security vulnerabilities in software, but the bigger question is whether they can turn those flaws into functional exploits that work in the real world. After all, many AI-discovered bugs prove minor or difficult to weaponize. New research, however, suggests frontier models can indeed develop working exploits when directed to do so.

To better understand the rapidly changing security landscape, computer scientists from UC Berkeley, the Max Planck Institute for Security and Privacy, UC Santa Barbara, Arizona State University, Anthropic, OpenAI, and Google decided to build ExploitGym, a benchmark for evaluating the exploitation capabilities of AI agents.

This is not an entirely disinterested set of investigators – Anthropic, OpenAI, and Google all sell AI services. And both Anthropic and OpenAI have talked up the risks of their leading models, Claude Mythos Preview and GPT-5.5, while selling access to government partners. Since Anthropic announced Mythos in early April, the security community has been critical of the company's approach, described by some as fear-mongering. And various security experts have made the case that even commercially available AI models can find security flaws.

Nonetheless, Mythos and GPT-5.5 outshine their peers in ExploitGym, as described in the paper, "ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?"

ExploitGym consists of 898 real vulnerabilities found in applications, Google's V8 JavaScript engine, and the Linux kernel. Its workout consists of presenting an AI agent with a vulnerability and a proof-of-concept input that triggers it, to see whether the agent can create an exploit capable of arbitrary code execution. According to the UC Berkeley Center for Responsible Decentralized Intelligence, Mythos Preview successfully exploited 157 test instances and GPT-5.5 managed 120 in the allotted two-hour window.
"Even when standard security defenses like ASLR or the V8 sandbox were turned on, a meaningful number of exploits still worked," the boffins wrote in a blog post. "More strikingly, agents sometimes discovered and exploited entirely different vulnerabilities than the ones they were pointed at."

The agents (CLI + model) tested were Claude Code with Claude Opus 4.6, Claude Opus 4.7, Claude Mythos Preview, and GLM-5.1; Codex CLI with GPT-5.4/GPT-5.5; and Gemini CLI with Gemini 3.1 Pro. Even the ancient models released in February (Opus 4.6 and Gemini 3.1 Pro) had some success.

The researchers say one of their more interesting findings is that these models sometimes went "off-script" in capture-the-flag (CTF) environments, where an agent has to find and retrieve some hidden value. This was most evident with Mythos Preview and GPT-5.5: the former succeeded in 226 CTF exercises but used the intended bug in only 157 instances, while the latter captured 210 flags and used the intended bug in only 120 of those cases.

The authors also note that while there was some overlap in the exploits discovered, the various models found different exploits, which suggests applying a diverse set of models might be advantageous in both attack and defense scenarios.

It's worth adding that ExploitGym tests were run with security guardrails disabled. When the test was re-run on GPT-5.5 with default safety filters active, the model refused 88.2 percent of the time before making any tool call. The Register, however, has seen security researchers craft prompts in ways that avoid triggering refusals, so safeguards of that sort have limits.

"Our results show that autonomous exploit development by frontier AI agents is no longer a hypothetical capability," the authors state in their paper. "While current agents are not yet reliable across all targets, they already exploit a non-trivial fraction of real-world vulnerabilities, including complex targets such as kernel components." ®

LocalSend puts your sneakernet out of business

TheRegister - Fri, 2026-05-15 19:10
FOSS It happens all the time. You have a file on one of your devices and you need to have it on another one. You could put the file on a USB flash drive and walk it over (the so-called sneakernet), you could email it to yourself, or you could try to set up some kind of network resource. LocalSend, a free open source tool, makes the process of sharing files on a LAN easier than anything else, and it works on Windows, Linux, macOS, Android, and more.

The Reg FOSS desk is not routinely a fan of Apple fondleslabs. (We've tried, but they're a bit too locked down for us.) That said, from what we've heard, LocalSend is a bit like Apple's AirDrop, but for grown-up computers and non-Apple kit. For Linux Mint users, it's a bit like the included Warpinator – and as that page says, don't search for it and go to warpinator.com, as it's a fake site.

It's a free download from its GitHub page and is also available in Canonical's Snap store and on Flathub. You run it, and it gives that computer a cute nickname in the form of (adjective)+(fruit). Run it on two computers on the same local network, and they should see each other. You click "send" on one, and "receive" on the other, and that's about it: pick the file or folder, and off it goes. LocalSend isn't very big – the installation packages are mostly around the 15 MB mark – so it's pretty fast to download and install.

This vulture found and tried it when we downloaded a just-over-4 GB file and then worked out we'd downloaded it onto the wrong OS on the wrong machine. It takes a good few minutes to download several gigabytes – we live on a small, remote island, where our 100 Mbps broadband costs about four times what 1 Gbps broadband used to cost in Czechia – and it seemed worth trying to transfer it rather than grab another copy.

The gist of the idea is that LocalSend is quicker than using a USB key.
You know the sort of process: find a big enough USB key, check it has space, copy the file onto it, eject it, go to the other machine, insert it, and copy the file off again. Even if it goes perfectly, LocalSend is still less hassle. It's also easier than configuring some kind of temporary folder-sharing setup between different OSes on different computers with different login names. (The Irish Sea wing of Vulture Towers recently moved house and has yet to finalize his office layout and reconnect his NAS servers. It's climbing to the top of the to-do list, though.)

LocalSend is also available on both the iOS App Store and Google Play Store, so it can help for devices that you can't readily plug a USB key into. The transfer happens across your local network, so it won't use up bandwidth on metered internet connections, and it will even work if your internet connection is down.

Warpinator is Mint's solution – but in our case, we initially needed to move the file from Windows to macOS. Both have ports of Warpinator, but both seem unofficial, and while the machines could see one another, file transfers failed.

We've also tried SyncThing, but it's not good at keeping machines in sync when they're rarely on at the same time – and we've had problems with it recursively duplicating directory trees into themselves so deeply that no GUI tool could delete them. Ideally, you should have an always-on home server that also runs SyncThing – and if you have one of those, then for one-off file transfers, you don't really need SyncThing: just copy the file to the server, and off again.

LocalSend just worked, and for us, it worked identically whether either end was running Windows, Linux, or macOS. We couldn't ask for more. ®
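For anyone wanting to try it on Linux, installation is a one-liner from either store mentioned above. The package identifiers below are as published at the time of writing – check the project's GitHub page if in doubt:

```shell
# From Canonical's Snap store
sudo snap install localsend

# Or from Flathub
flatpak install flathub org.localsend.localsend_app
```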

Bill To Block Publishers From Killing Online Games Advances In California

Slashdot - Fri, 2026-05-15 19:00
An anonymous reader quotes a report from Ars Technica: A bill focused on maintaining long-term playable access to online games has passed out of the California Assembly's appropriations committee, setting up a floor vote by the full legislative body. The advancement is a major win for Stop Killing Games' grassroots game preservation movement and comes over the objections of industry lobbyists at the Entertainment Software Association.

California's Protect Our Games Act, as currently written, would require digital game publishers who cut off support for an online game to either provide a full refund to players or offer an updated version of the game "that enables its continued use independent of services controlled by the operator." The act would also require publishers to notify players 60 days before the cessation of "services necessary for the ordinary use of the digital game." As currently amended, the act would not apply to completely free games and games offered "solely for the duration of [a] subscription." Any other game offered for sale in California on or after January 1, 2027, would be subject to the law if it passes. [...]

In a formal statement of support for the bill sent to the California legislature, SKG wrote that "there is no other medium in which a product can be marketed and sold to a consumer and then ripped away without notice. As live service games rise in popularity for game developers and gamers alike, end-of-life procedures are essential tools to ensure prolonged access to the games consumers pay to enjoy."

The Entertainment Software Association, which helps represent the interests of major game publishers, publicly told the California Assembly last month that the bill misrepresents how modern game distribution actually works. "Consumers receive a license to access and use a game, not an unrestricted ownership interest in the underlying work," the ESA wrote.
The eventual shutdown of outdated or obsolete games is "a natural feature of modern software," the group added, especially when that software requires online infrastructure maintenance. The ESA also said the bill would impose unreasonable expectations on publishers regarding music licensing and other IP rights, which are often negotiated on a time-limited basis. "A legal requirement to keep games playable indefinitely could place publishers in an impossible position -- forcing them to renegotiate licenses indefinitely or alter games in ways that may not be legally or technically feasible," they wrote.



OpenAI Now Wants ChatGPT To Access Your Bank Accounts

Slashdot - Fri, 2026-05-15 18:00
OpenAI is previewing a feature that lets ChatGPT Pro users connect bank and investment accounts through Plaid, allowing the chatbot to analyze spending, subscriptions, balances, portfolios, debt, and major financial decisions. "More than 200 million people are already going to ChatGPT every month with finance questions -- from budgeting to tips on how to cut back on spending," OpenAI said in its announcement. "Now, users can securely connect their financial accounts with Plaid to get the full view of their financial picture in the context of their personal goals, lifestyle, and priorities that they've shared with ChatGPT, powered by OpenAI's advanced reasoning capabilities."

The Verge reports: When financial accounts are connected, OpenAI says that ChatGPT users can view a dashboard that details their spending history, including any active subscriptions. Users can also ask it to help with financial decisions like buying a house or signing up for credit cards and flag any changes in spending habits. This financial feature will initially be available to users in the US who subscribe to ChatGPT's $200-per-month Pro tier. "We'll learn and improve from early use before rolling it out to Plus, with the goal of making it available to everyone," says OpenAI.

To assuage concerns, OpenAI promises users "control over their data," including the ability to disconnect their bank accounts from ChatGPT at any time, though the company has up to 30 days to delete your data from its systems. You can also view and delete "financial memories" like goals or financial obligations saved by the chatbot. User control extends to whether your data is fed back into AI models -- users can enable the option to "Improve the model for everyone" to allow financial data in their ChatGPT conversations to be used for training AI, for example. OpenAI also says ChatGPT can't make any changes to your bank accounts or see "full account numbers."



Microsoft puts stability in the driver's seat with new initiative

TheRegister - Fri, 2026-05-15 17:25
Microsoft has laid out plans for how it and its partners will deal with iffy drivers causing stability problems in the company's flagship operating system.

Dubbed the Driver Quality Initiative (DQI), the program rests on four pillars: Architecture – hardening kernel-mode drivers and enabling third-party kernel-mode drivers to transition to user mode; Trust – raising the bar for trusted partners and drivers; Lifecycle – addressing outdated and low-quality drivers; and Quality Measures – going beyond simple crash counts to measure driver quality.

It's all very laudable, although, aside from references in the architecture pillar, Microsoft's WinHEC 2026 announcement said little about how Redmond ended up in a situation where drivers can run at a privilege level that allows a failure to leave the operating system hopelessly borked. The infamous CrowdStrike incident of 2024, which crashed millions of Windows devices, ably demonstrated the dangers of drivers running around in the Windows kernel. Microsoft later blamed a 2009 undertaking with the European Commission for how that situation came to be, although it skipped over the whole not-creating-an-API-so-security-vendors-didn't-need-kernel-access part.

In the months after the CrowdStrike incident (or "learnings", as Microsoft delicately put it), the Windows Resiliency Initiative was announced. According to Microsoft, "DQI builds on the learnings and infrastructure established through the Windows Resiliency Initiative."

Drivers are the bane of many Windows users: a faulty driver can make the entire operating system unstable. Sure, a customer might wonder how such a situation was allowed to happen. Still, we are where we are, and dealing with it requires Microsoft to harden the operating system and provide ways for vendors to work with Windows that don't involve breaking down the kernel's doors. Those same vendors need to ensure that drivers are high-quality and reliable.
"Driver and platform quality," wrote Microsoft, "is central to the customer experience." The company has espoused much in recent months about how it intends to "fix" Windows after a disastrous few years that have taken a hatchet to consumer confidence. Fripperies like moving the taskbar and rethinking Redmond's relentless pushing of Copilot are one thing. Dealing with driver-related crashes is quite another. WinHEC 2026 has shown that at least some within Microsoft are determined to deal with the fundamentals, and that requires taking the Windows maker's hardware partners along for the ride. ®

ArXiv to Ban Researchers for a Year if They Submit AI Slop

Slashdot - Fri, 2026-05-15 17:00
ArXiv says it will ban authors for one year if they submit papers containing AI-generated slop, such as hallucinated citations, placeholder text, or chatbot meta-comments left in the manuscript. "If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s)," said Thomas Dietterich, chair of the computer science section of ArXiv, on X. "We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper."

404 Media reports: Examples of incontrovertible evidence, he wrote, include "hallucinated references, meta-comments from the LLM ('here is a 200 word summary; would you like me to make any changes?'; 'the data in this table is illustrative, fill it in with the real numbers from your experiments.')." "The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue," Dietterich wrote.

Dietterich told [404 Media] in an email on Friday morning that this is a one-strike rule -- meaning authors caught just once including AI slop in submissions will be banned -- but that decisions will be open to appeal. "I want to emphasize that we only apply this to cases of incontrovertible evidence," he said. "I should also add that our internal process requires first a moderator to document the problem and then for the Section Chair to confirm before imposing the penalty."

Read more of this story at Slashdot.

