Linux fréttir
On-call techie decided job was done and hit the bottle – just before his pager went off
ON CALL Welcome to another installment of On Call, The Register's weekly reader-contributed column that celebrates the IT professionals who put their lives on pause to provide tech support at all hours. This week, meet a reader we'll Regomize as "Jemaine." In the early 1990s, he found himself in Hong Kong working as a database specialist on VAX/VMS systems. "We'd built a billing application for a telco client in Macau, and it had been running happily for some time," he told On Call. By the time the system needed its first major OS upgrade, Jemaine was therefore happy for the local crew to handle the job. His client had other ideas and, despite also arranging for two DBAs to be present during the upgrade, insisted he show up. This was not a hardship because the job coincided with the Macau Grand Prix and Jemaine wasn't required to be on site. The client had therefore provided him with a hotel room that, as luck would have it, had a view of the track! "A couple of friends ended up crashing my room, and we spent the weekend watching insane drivers hurl cars around an absurdly tight street circuit," Jemaine admitted. The client never called or paged, so after the race Jemaine was confident the upgrade was going well. He and his friends therefore consumed "several bottles of rich Portuguese red wine" and ordered a sumptuous meal. "Dessert had just arrived when my pager went off," he told On Call. Jemaine poured himself into a cab to his client's office and found a situation he described as "vague but clearly serious" because the billing application wouldn't start. "Judging by the silence and the stoic expressions, everyone was quietly panicking," Jemaine wrote. He soon learned that the client had already tried to fix the app by reinstalling the OS twice and had now decided the database was the source of the problem. Jemaine was told to wait while the DBAs reinstalled the database, which "gave me time to sit in a back room and sober up slightly," he admitted to On Call. 
The database rebuild finished at about 2 am, but the application still refused to start. The client then turned to Jemaine. "I was summoned and interrogated by the systems team," he said, and ran a quick check that showed the database was perfectly healthy – but the batch scheduler wasn't running. To probe that problem, Jemaine asked to speak with the lead developer – who, it turned out, was not on site. "An urgent page was sent, and fortunately he called back quickly. His suggestion was to step through the code. This meant compiling a large COBOL program I'd never seen before in DEBUG mode, then single-stepping through it over the phone with the developer." By now, an increasingly anxious semicircle of client staff was watching Jemaine's every move, and he felt like they were silently shifting blame in his direction. "At around 4 am, we found the failure point: batch queue submission. The call was returning a null error code. The developer was baffled." "I reached for the physical manual to see what the function actually did," Jemaine wrote. "And then, for reasons I still credit to the Portuguese wine gods, I asked a simple question: 'What account did you test this under?'" The developer immediately replied: "Administrator." Jemaine asked the OS upgrade team to run the application with administrator privileges, and it immediately worked. "The OS upgrade had introduced a new permission requirement for submitting jobs to the batch queue," Jemaine told On Call. So this was very much not his problem, and he was able to excuse himself and stagger home as the Sun started to rise. "Nobody from the company ever mentioned the incident to me again," he told On Call. "And I can't remember the name of the wine we were drinking." Have you been on call, decided nothing could possibly go wrong, and then been caught out? If so, click here to send On Call an email so we can tell your story on a future Friday. ®
Categories: Linux fréttir
AWS racks M3 Ultra Macs that boast specs you can’t currently buy
Amazon Web Services has done something many others can’t achieve: Buy a bunch of Apple’s Mac Studio computers. The Mac Studio is Apple’s workstation-grade machine, and the boxes have been hard to find in recent weeks as Cupertino struggles to find enough RAM to fill them, and AI enthusiasts snap up stock to run tools like OpenClaw. At the time of writing, Apple advises buyers they’ll need to wait nine or ten weeks for a Mac Studio to arrive. The cloudy Macs AWS has racked and stacked pack Apple’s M3 Ultra SoC, Cupertino’s most powerful chip. Apple currently sells the Mac Studio with up to 96GB of RAM. AWS on Thursday started offering a cloudy M3 Ultra with 256GB of unified memory, a configuration The Register did not see as an option on Apple.com while preparing this article. The cloudy M3 Ultra machines run on actual Mac Studios packing a 28-core CPU, 60-core GPU, and 32-core Neural Engine. At the time of writing, AWS hadn’t updated its list of EC2 instance types to include the new M3 Ultra instances, so we can’t tell you what they’ll cost or whether the cloud giant has departed from its past practice of renting bare metal machines rather than macOS VMs. Apple allows users to create and run macOS virtual machines, but only on Apple hardware, and permits just two VMs per host. Cupertino also restricts use of VMs to four purposes: software development; testing during software development; using macOS Server; and personal, non-commercial use. AWS recommends its cloudy Macs as an ideal platform to build and test apps for all of Apple’s operating systems – even the visionOS that powers its unloved Vision Pro VR goggles. Amazon’s M3 Ultra Mac Studios only made it into two regions – US East and US West (Oregon) – so users elsewhere who fancy a cloudy Mac but need lower latency will have to endure the very on-prem experience of waiting for hardware to show up. ®
Categories: Linux fréttir
Musk Accused of 'Selective Amnesia', Altman of Lying As OpenAI Trial Nears End
An anonymous reader quotes a report from Reuters: A lawyer for Elon Musk hammered at the credibility of OpenAI CEO Sam Altman on Thursday, near the end of a trial over whether to hold the ChatGPT maker and its leaders responsible for allegedly transforming the nonprofit into a vehicle to enrich themselves. OpenAI's lawyers fought back, arguing the world's richest person waited too long to claim OpenAI breached its founding agreement to build safe artificial intelligence to benefit humanity, and couldn't claim he was essential to its success. "Mr. Musk may have the Midas touch in some areas, but not in AI," said William Savitt, a lawyer for OpenAI. "To succeed in AI, as it turns out, all Mr. Musk can do is come to court."
The claims were made during closing arguments of a trial in the Oakland, California, federal court. [...] In his closing argument, Musk's lawyer Steven Molo told jurors that five witnesses, including Musk, former OpenAI board members and former OpenAI Chief Scientist Ilya Sutskever, testified that Altman was a liar. Molo also noted that during cross-examination on Tuesday, Altman did not say yes unequivocally when asked if he was completely trustworthy and did not mislead people in business. "Sam Altman's credibility is directly at issue in this case," Molo said. "If you don't believe him, they cannot win."
Molo accused OpenAI of wrongfully trying to enrich investors and insiders at the nonprofit's expense, and of failing to prioritize AI safety. He also challenged Brockman's goals for the business, citing Brockman's statement that his own OpenAI stake was worth nearly $30 billion. "The arrogance, the lack of sensitivity, the failure to account for just common decency is really, really abhorrent." Musk also accused Microsoft, which invested $1 billion in OpenAI in 2019 and $10 billion in 2023, of aiding and abetting OpenAI's wrongful conduct. "Microsoft was aware of what OpenAI was doing every step of the way," Molo said.
Sarah Eddy, another lawyer for the OpenAI defendants, accused Musk and his legal team in her closing argument of resorting to "sound bites and irrelevant false accusations."
Eddy said by 2017, everyone associated with OpenAI -- including Musk, then still on its board -- knew it needed more money to fulfill its mission than it could raise as a nonprofit. "Mr. Musk wanted to turn OpenAI into a for-profit company that he could control," she said. "But the other founders refused to turn the keys of AGI (artificial general intelligence) over to one person, let alone Elon Musk." She also said if Musk truly believed AI should serve humanity, he would not have pushed to fold OpenAI into his electric car company Tesla, or made his rival xAI a for-profit company.
Musk had a three-year statute of limitations to sue, and OpenAI's lawyers said his August 2024 lawsuit came too late because he knew several years earlier about OpenAI's growth plans.
Eddy expressed disbelief that Musk claimed he did not read a four-page term sheet in 2018 discussing OpenAI's plan to seek outside investments. "One of the most sophisticated businessmen in the history of the world" wouldn't have "stuck his head in the sand," Eddy said. Savitt accused Musk of having "selective amnesia." Microsoft's lawyer Russell Cohen said in his closing statement that Microsoft wasn't involved in the key events of the case, and was "a responsible partner at every step." On Monday, the nine-person jury is expected to begin deliberating. The judge and lawyers will also return to court to discuss possible remedies if Musk wins, including how OpenAI should be restructured and what damages might be awarded. If Musk loses, there will be no remedies to consider.
Recap:
OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk (Day Eleven)
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Read more of this story at Slashdot.
Categories: Linux fréttir
Possible Samsung strike puts even more pressure on memory pricing
RAM prices have risen after negotiations between Samsung and a union representing many of its workers collapsed – and the union has now called for a lengthy strike to start next week. The National Samsung Electronics Union (NSEU) has noticed the extraordinary profits the Korean giant is making thanks to the high price of RAM, and wants the company to boost members’ pay with bonuses tied to profits. Talks on that idea have stalled, and the union’s observation that Samsung pays its memory-making workers less than their peers at SK Hynix earn hasn’t found a receptive ear. The union therefore plans to start an 18-day strike next week. If the industrial action goes ahead, it has the potential to disrupt memory production, which would mean further shortages at a time when DRAM is already expensive and hard to acquire due to rampant demand for AI infrastructure. Short-term memory prices have therefore spiked in the last 72 hours – which, ironically, will just increase Samsung’s profits even more! The union has accused Samsung of not taking its arguments seriously, and South Korea’s government has stepped in with attempts to bring the two parties to the table for fresh talks that lawmakers hope will resolve the situation because The Spice Must Flow. Or maybe The RAM Must Roll. Samsung recently posted almost $40 billion in profit for a single quarter, thanks largely to memory sales. That enormous sum, and others like it reported by Korean companies that sell memory and other products in demand from AI builders, caught the attention of Yong-Beom Kim, South Korea’s Chief Presidential Secretary for Policy – a ministerial role. Using his personal Facebook page, Kim suggested funneling a portion of AI profits into a “national dividend fund” that could be used to improve South Korea’s long-term prospects. His post mentions Norway’s sovereign wealth fund, which famously siphoned off revenue from oil sales and invested it in shares to create assets worth over $2 trillion.
Vendors often tell The Register “data is the new oil” so maybe Kim is on to something – although the metaphor may not work well when one considers current events in the Strait of Hormuz and their effect on the world. ®
Categories: Linux fréttir
Cerebras risked it all on dinner plate-sized AI accelerators a decade ago. Today it’s worth $66 billion
Cerebras Systems has done what many chip startups aspire to but few ever achieve. On Thursday, the long-time Nvidia rival raised $5.55 billion in an initial public offering (IPO), making the company worth more than $66 billion on its first day of trading. The milestone didn’t happen overnight. It took more than a decade, a radically different approach to chipmaking, and two separate attempts at an IPO to pull off. Founded in 2015 by former SeaMicro head Andrew Feldman, Cerebras Systems' first chips looked nothing like the GPUs or AI accelerators of the time.

The bet that put Cerebras on the map

At the time, most high-end GPUs used dies measuring roughly 800 square mm that’d been cut from a larger wafer. Eight or more of these GPUs would typically be stitched together by high-speed interconnects, like NVLink, which allowed them to pool their resources and behave like one big accelerator. Rather than cutting up a wafer into smaller chips just to reconnect them again, Cerebras figured: why not etch all that compute into a wafer-sized chip? And so the Wafer-Scale Engine (WSE), a giant chip measuring 46,225 square mm – about the size of a dinner plate – was born. Cerebras' first chips weren’t just bigger; they were purpose-built for AI training and sported a novel compute engine designed to speed up the highly sparse matrix multiply-accumulate operations common in deep learning. This hardware sparsity took advantage of the fact that large portions of a neural network’s parameters ultimately end up being zeros, allowing Cerebras to boost the effective computational output of its first-gen WSE accelerators from 2.65 16-bit petaFLOPS to 26.5 petaFLOPS. Nvidia added support for sparsity in its Ampere generation a year later, but it only worked for a specific ratio (2:4), limiting its effectiveness to select use cases. To train a model, up to 16 of these chips could be ganged together over a high-speed interconnect.
This was kind of important too, because unlike GPUs, which stored model weights in HBM or GDDR memory, Cerebras' chips were almost entirely reliant on on-chip SRAM. Although SRAM is insanely fast, which is why it’s used for caches in basically every modern processor, it’s not particularly space efficient. While Cerebras' first wafer-scale accelerator could theoretically reach 9 petabytes per second of memory bandwidth, it was limited to just 18 GB of capacity at a time when Nvidia was already at 32 GB per GPU and about to make the leap to 40 GB or even 80 GB per chip. Still, the approach was performant enough that for its second-generation wafer-scale accelerator, launched in 2021, Cerebras doubled down on the architecture. While the WSE-2 wasn’t physically larger, the move to TSMC’s 7nm process tech allowed the company to more than double the transistor count, compute density, SRAM capacity, and bandwidth. The chips also supported larger clusters, scaling up to 192 systems, though in practice these clusters were usually smaller, at between 16 and 32 systems per site. It was also around this time that Cerebras caught the attention of United Arab Emirates-based cloud provider G42, which quickly became its largest financier. By mid-2023, the chip startup had secured orders worth $900 million for nine supercomputing sites with 36 exaFLOPS of super-sparse AI compute between them. A year later, Cerebras made the jump to TSMC’s 5nm process with the WSE-3, and while memory capacity and bandwidth only saw modest gains, compute once again doubled, now topping 125 petaFLOPS of sparse (12.5 petaFLOPS dense) compute at 16-bit precision. Cerebras’ CS-3 systems have since seen the widest deployment, and now power the majority of the Condor Galaxy cluster it built for G42, as well as several new sites across North America and Europe.
Cerebras' inference inflection

Up to mid-2024, Cerebras' primary focus had been on training, but then the company announced a boutique inference-as-a-service offering to rival those from competing chip startups like Groq and SambaNova. It turned out that Cerebras’ latest AI accelerators’ massive SRAM capacity not only made them potent training accelerators but also left them particularly well suited to high-speed LLM inference. By its third iteration, Cerebras' wafer-scale accelerator boasted more memory bandwidth than it could realistically use. At 21 PB/s, the chip’s memory is nearly 1000x faster than Nvidia’s new Rubin GPUs. This, along with a dash of speculative decoding, allowed Cerebras to generate tokens far faster than any GPU-based system of the time. Even today, Cerebras routinely ranks among the fastest inference providers in the world. According to Artificial Analysis, Cerebras' kit can churn out more than 2,200 tokens a second when running GPT-OSS 120B High, 2.8x faster than the next closest GPU cloud, Fireworks. Cerebras didn’t know it at the time, but its inference platform would become a much bigger business than anyone had expected, and in September 2024 the company submitted its S-1 filing to the SEC to go public. Almost exactly a year later, Feldman quietly pulled the S-1, delaying the IPO. His reasons? The company’s initial S-1 filing was rather concerning, as it showed G42 was responsible for 87 percent of its revenues. But in the year since launching its inference platform, Cerebras had racked up several high-profile customer wins from big names like Alphasense, AWS, Cognition, Meta, Mistral AI, Notion, and Perplexity. Feldman explained that the initial S-1 didn’t yet show the financial results of this growth. The company believed it would have a better story to tell investors later down the road. Cerebras' inference platform has only grown since then.
The company has steadily expanded its footprint while announcing deeper relationships with AWS and adding OpenAI as a customer. On Thursday, the startup officially joined the NASDAQ under the ticker CBRS, having raised $5.5 billion in the process. Shares skyrocketed nearly 70 percent on the first day of trading, as investors poured their money into a new way to play the AI boom. An IPO is something many startups aspire to but few, especially in the cutthroat world of semiconductors, ever accomplish.

What happens now

From a technical perspective, Cerebras is overdue for a refresh. The WSE-3 accelerators that pushed it over the IPO finish line are getting rather long in the tooth, and the architecture lead afforded by its SRAM-heavy design is shrinking. Nvidia’s acquihire of Groq gave Feldman’s long-time rival an SRAM-packed inference platform of its own, while others are racing to catch up. From here, we can only speculate, but we’ll hazard a guess that Cerebras' new shareholders are going to want to see new silicon sooner rather than later. Based on its existing roadmap, we expect WSE-4 will offer a sizable leap in floating point performance, though not necessarily at 16-bit precision. Much of the industry has aligned around lower-precision data types like FP8 and FP4. An exaFLOP of ultra-sparse FP4 compute wouldn’t shock us in the least. How useful sparsity would actually be for LLM inference is another matter. LLM inference hasn’t historically benefited much from sparsity, but that’s never stopped chipmakers from advertising sparse FLOPS anyway. We also expect to see Cerebras pack more SRAM into its next wafer-scale compute platform, possibly using TSMC’s 3D chip-stacking tech to do it. The WSE-3’s 44GB of SRAM capacity remains a limiting factor for what models it can and can’t serve efficiently.
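The arithmetic behind that SRAM ceiling is simple enough to sketch. Here's a back-of-envelope illustration (our own math, not Cerebras' sizing guidance): dividing a model's weight footprint by the 44GB of per-chip SRAM gives a lower bound on how many accelerators are needed just to hold the weights.

```python
import math

WSE3_SRAM_GB = 44  # on-chip SRAM per WSE-3, per Cerebras' public specs


def chips_needed(params_billions: float, bytes_per_param: float) -> int:
    """Lower-bound chip count to hold all model weights in SRAM.

    Ignores activations, KV cache, and any weight duplication for
    parallelism, so real deployments need at least this many chips.
    """
    weight_gb = params_billions * bytes_per_param  # 1B params x 1 byte = 1 GB
    return math.ceil(weight_gb / WSE3_SRAM_GB)


# A 1-trillion-parameter model at different weight precisions:
print(chips_needed(1000, 2.0))   # 16-bit weights -> 46 chips
print(chips_needed(1000, 0.5))   # 4-bit quantized weights -> 12 chips
```

Real deployments also need on-chip memory for activations and KV cache, and may replicate weights across chips, so practical counts land toward the higher end of any such estimate.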
A trillion-parameter model like Kimi K2 would require somewhere between 12 and 48 of Cerebras' WSE-3 accelerators, depending on how the model weights are stored and how many parameters have been pruned, so any increase in SRAM capacity would go a long way toward improving the efficiency of its accelerators.

More collaborations

Alongside new silicon, we can also expect to see more collaborations akin to Cerebras' tie-up with AWS. Earlier this year, AWS announced it would combine its Trainium3 AI accelerators with Cerebras' WSE-3-based systems to speed up its inference platform in much the same way Nvidia is doing with Groq’s accelerators. Cerebras could certainly do something similar with AMD or any other chipmaker. In this sense, Cerebras is in a position to offer its chips as decode accelerators, which take on the bandwidth-intensive parts of the inference pipeline while other chips handle the compute-heavy prompt-processing side of the equation. However Cerebras frames its next collab, its shareholders are going to expect growth. And as the saying goes, the enemy of my enemy is my friend. ®
Categories: Linux fréttir
UK Antitrust Regulator Is Officially Investigating Microsoft Office
The UK's Competition and Markets Authority is opening a formal investigation into whether Microsoft's bundling of Windows, Office, Teams, Copilot, and related products harms competition. Engadget reports: "Our aim is to understand how these markets are developing, Microsoft's position within them and to consider what, if any, targeted action may be needed to ensure UK organizations can benefit from choice, innovation and competitive prices," CMA Chief Executive Sarah Cardell said in a statement published by Reuters.
She also stressed the importance of the investigation by noting that hundreds of thousands of UK residents use business software and Microsoft products. The organization will take a look into the company's cloud licensing practices. The CMA has stated that the inquiry will conclude by February. At that point, Microsoft could get slapped with a strategic market label.
Microsoft says it's "committed to working quickly and constructively with the CMA to facilitate its review of the business software market." A strategic market designation doesn't automatically imply wrongdoing, but would give the CMA more leeway when conducting further interventions.
Read more of this story at Slashdot.
Categories: Linux fréttir
Nobody believes the 'criminals and scumbags' who hacked Canvas really deleted stolen student data
FEATURE When Instructure “reached an agreement” with data theft and extortion crew ShinyHunters this week – after attackers claimed to have stolen data tied to 275 million students, teachers, and staff – the education tech giant assured Canvas users that their private chats and email addresses would not turn up on a dark-web marketplace, and that they would not be extorted over the incident. “We received digital confirmation of data destruction (shred logs),” Instructure assured the nearly 9,000 affected universities and K-12 schools. “We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.” Not a single responder that The Register spoke with believes this is true. “Do I believe they deleted the data? No. They're criminals and scumbags,” Recorded Future threat intelligence analyst Allan Liska, aka the Ransomware Sommelier, told us. “But, this is part of what Max Smeets calls ‘The Ransomware Trust Paradox,’” he added. “Ransomware groups have to, minimally, not post data they claimed to have deleted or no one will pay them in the future, but this is done knowing that the data is likely not deleted.” Halcyon Ransomware Research Center SVP Cynthia Kaiser, who previously spent two decades at the FBI, said she doesn’t think that anyone who studies ransomware groups’ operations believes the gang actually destroyed the stolen files. “‘We destroyed the data’ is a standard line from extortion groups once a payment is made or negotiations conclude, but time after time it has proven untrue,” Kaiser told The Register. “ShinyHunters in particular has a documented history of recycling, reselling, and re-leveraging stolen data across campaigns – data they claimed was contained from earlier intrusions has resurfaced on criminal forums months and years later.” Kaiser also doesn’t think this is the last threat that the schools will face from the Canvas breach.
“Halcyon expects targeted phishing waves against staff, students, and parents over the next six to 12 months using leaked names, email addresses, and Canvas chat context to make the lures convincing,” she said. To be clear: Instructure execs never directly said the company paid the ransom, and we don’t know the exact amount of money the criminals demanded from the digital learning biz. We do know, however, that “reached an agreement” is corporate-speak for the victim paid up. Doug Thompson, chief education architect at cybersecurity firm Tanium, estimates the figure sits somewhere between $5 million and $30 million. Meanwhile, this latest extortion attack illustrates the impossible choice facing organizations entrusted with protecting people’s data when digital thieves breach their networks and steal sensitive information. “The FBI says don’t pay,” Thompson told The Register. “But the operational reality at 3 a.m. during finals week or enrollment season can push institutions toward a very different calculation. Until that incentive structure changes, education is likely to remain unusually vulnerable to extortion pressure.”

To pay, or not to pay?

The US federal government, law enforcement agencies, and private-sector threat intelligence analysts all advise victims not to pay a ransom. “Paying ransoms rewards and incentivizes the criminals, funding their search for new victims, and I’ve long advocated before for a ban on ransomware payments,” Emsisoft threat analyst Luke Connolly told us. “But in the absence of regulation applying to all organizations, the stark reality is that Instructure faced a crisis, and they negotiated to try to minimize risk and harm.” No company wants to pay a ransom to its attackers, and most say they won’t – at least in principle – because they don’t want to fund criminal operations and incentivize the crooks. There’s also no guarantee that paying will result in the return of their data or prevent additional extortion attempts.
CrowdStrike surveyed 1,100 global security leaders last summer, and of the 78 percent who said they experienced a ransomware attack in the past year, 83 percent of those that paid ransoms were attacked again. Plus 93 percent lost data regardless of payment. While data suggests that fewer organizations are paying criminals’ ransom demands - Chainalysis found the percentage of paying victims in 2025 dropped to an all-time low of 28 percent, despite attacks hitting record highs - when faced with extortion or a ransomware infection, the "to pay or not to pay" debate becomes much more complicated. “Most organizations still say publicly that they won't pay, and many genuinely don't, but when the alternative is mass downstream harm to students, parents, and thousands of customer institutions, the calculus shifts,” Kaiser said. “Pay-or-leak groups like ShinyHunters specifically engineer that calculus by creating intense financial and reputational pressure, and when demands go unmet, they escalate to direct harassment of victim companies, employees, and clients.” ShinyHunters did just that. The crew initially compromised Instructure in late April, and after the initial pay-or-leak deadline passed on May 6, ShinyHunters switched tactics to school-by-school extortion. They injected a ransom message into about 330 Canvas school login portals, causing Instructure to take the platform offline for a day - during final exams and Advanced Placement testing for many. Other ransomware scum have gone to horrifying extremes, posting pictures and addresses of preschool children in an effort to get a payday, leaking cancer patients’ nude photos and threatening them with swatting attacks. Mandiant Consulting CTO Charles Carmakal previously told The Register that ransomware infections have morphed into "psychological attacks” with crooks SIM swapping executives’ kids to pressure their parents into paying. 
Calculating risk

In addition to responding to criminals directly harassing their students, patients, customers, and employees, victim organizations also have to take into account potential lawsuits if the crooks dump individuals’ personal or health data, and the reputational hit from seeing all of this protected information published online. The decision about what to do in a ransomware attack revolves around risk reduction, Liska said. “Not paying a ransom means an increased risk of data exposure, which in this case could cause serious harm,” he told us. “While there is no good decision in most ransomware negotiations, the idea is to protect as many people as possible and that may mean that paying is the least bad option.” While he didn’t respond to or investigate the Instructure case, “protecting children's data is absolutely a critical factor in these types of decisions, especially when the attacks originate from one of the groups associated with The Com,” Liska added. The Com, a loosely knit group of primarily English speakers who are also involved in several interconnected networks of hackers, SIM swappers, and extortionists such as ShinyHunters and Scattered Lapsus$ Hunters, has been known to blackmail kids and teens into carrying out shootings, stabbings, and other real-life criminal acts. “These groups are known to coerce victims using threats of physical harm, including bricking and swatting," he said. "Not paying may have increased the risk of serious harm to the children whose data was exposed.”

Ed sector 'more likely to pay'

Instructure’s intrusion follows several other high-profile attacks against education-sector software providers. In December 2024, PowerSchool suffered a breach affecting tens of millions of students. The company reportedly paid about $2.85 million in bitcoin in exchange for a video supposedly showing the attackers destroying the data.
But about five months later, in May 2025, the ed-tech provider’s school district customers received individual extortion threats from either the same ransomware crew that hit PowerSchool or someone connected to the crooks. Earlier this year, ShinyHunters claimed it stole data from K-12 software provider Infinite Campus as part of a broader wave of Salesforce-related intrusions. “Education keeps emerging as one of the sectors where organizations are still more likely to pay under pressure,” Thompson said. Students’ – especially minors’ – data contains highly sensitive personal details, making it an attractive target for attackers, but the trend is also driven in part by market pressure and economics. It’s costly and inconvenient for schools to switch learning management systems, and they are typically locked into multi-year contracts with these software vendors, according to Thompson. “The other issue is concentration,” he said. “A relatively small number of vendors hold data for enormous portions of the education system. PowerSchool, Infinite Campus, Canvas, Blackboard; those four hold records on something close to every American student, and hackers know it. Three of the four have been breached at a multi-million-record scale in the last 18 months.” Thompson said he expects additional attacks against major education platforms to follow. “The economics are good. Instructure paid. PowerSchool paid last year. Every other ed-tech vendor's board just had a conversation about what their number would be,” he told us. “The pattern is established.” According to Connolly, the universities and K-12 schools affected by the Canvas hack shouldn’t consider their data safe, regardless of Instructure’s assurances or the crooks' promises to delete it. “There will be future attacks, without a doubt.” ®
Categories: Linux fréttir
AT&T, Verizon, T-Mobile Team Up To Eliminate 'Dead Zones' Across US
AT&T, Verizon, and T-Mobile have agreed in principle to form a joint venture (JV) aimed at reducing U.S. mobile dead zones through satellite connectivity, especially in rural areas and during emergencies when ground networks fail. Here are three of the customer benefits listed by the JV (as highlighted by Droid Life):
Fewer coverage gaps: Will nearly eliminate dead zones in the U.S. currently without mobile service, reaching previously unserved areas.
Reliable connectivity in emergencies: Redundant connectivity will become available when existing ground-based networks are unavailable due to extreme natural disasters or other unusual disruptions.
Improved network performance: Will give customers more consistent performance and simpler access to satellite services across providers. This will speed up feature updates and improve connectivity for everyone, everywhere. "It will still take time for these improvements to be available to customers, but this all seems like a positive step," writes Droid Life's Tim Wrobel.
Read more of this story at Slashdot.
Writers Are Fleeing the Substack Tax
A growing number of writers are leaving Substack for alternatives most people haven't heard of like Ghost, Beehiiv, Patreon, and Passport. The reason, writes The Verge's Emma Roth, is the "platform's increased focus on social features as well as a pricing model that puts a chokehold on their business." From the report: Sean Highkin, the creator of the NBA-focused publication The Rose Garden Report, tells The Verge that he makes "significantly more money" after switching from Substack to Ghost last April. "When I first joined up, [Substack] gave me a big push and featured me and funneled a lot of traffic to me, which led to a good amount of growth," Highkin says. "But once I wasn't one of the 'new recruited talent' they could tout, they stopped featuring me and I saw my growth stagnate." Highkin now pays $2,052 per year using Ghost and an add-on called Outpost, compared to $4,968 per year on Substack. The Rose Garden Report's subscriber base has grown 22 percent since the end of 2024, Highkin says. [...]
Substack launched in 2017 as a platform that allows writers to create their own newsletters and manage paying subscribers. Unlike some of its biggest rivals, Substack takes a 10 percent cut of total subscription revenue. That tax may not seem substantial at first, but it quickly adds up as creators gain subscribers and begin charging more for their subscriptions. A calculator on Substack's own website estimates that for a newsletter charging $10 per month with 400 subscribers, the total monthly cost -- including the platform's 10 percent cut and credit card processing fees -- would add up to $636. That cost jumps to $15,900 per month with 10,000 subscribers and skyrockets to $79,500 per month for 50,000 members -- nearly $1 million per year.
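The calculator figures above are easy to reproduce. The sketch below assumes a card processing fee of 2.9% plus $0.30 per subscriber per month (a common Stripe rate; the article does not break out the processing fee, so that part is an assumption):

```python
# Reproduce Substack's cost-calculator math: 10% platform cut plus an
# assumed card fee of 2.9% + $0.30 per monthly charge per subscriber.

def substack_monthly_cost(subscribers, price=10.00, platform_cut=0.10,
                          card_pct=0.029, card_fixed=0.30):
    revenue = subscribers * price
    platform_fee = revenue * platform_cut          # Substack's 10% cut
    processing = revenue * card_pct + subscribers * card_fixed
    return platform_fee + processing

for n in (400, 10_000, 50_000):
    print(n, round(substack_monthly_cost(n)))
# -> 400 636
# -> 10000 15900
# -> 50000 79500
```

With those assumed card rates, the totals land exactly on the article's $636, $15,900, and $79,500 figures.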
Many Substack rivals charge a flat monthly fee, rather than a commission. Ghost, an open-source platform for blogs and newsletters, starts at $15 per month with 1,000 members for website creation, email newsletter capabilities, and a custom domain. Beehiiv, a creator platform with tools for launching a newsletter, website, and podcast, is free for up to 2,500 subscribers with limited access to certain features, like a built-in ad network, while its other plans vary in price based on subscriber count. A person with 10,000 subscribers, for example, will pay $96 per month for Beehiiv's "Scale" plan. There's also Kit, a newsletter platform that offers a tiered pricing model similar to Beehiiv, costing $116 per month with 10,000 subscribers on its "Creator" plan. It's not just the 10% fee critics are complaining about; they also argue the platform offers limited customization and third-party integrations compared to some of the mentioned alternatives, heavily promotes its own branding and social features, and makes creators more dependent on its ecosystem.
Beehiiv founder Tyler Denk argues that creators should be able to build their own brands without the platform taking center stage: "We don't want to take credit for the work of our content creators." While writers can export subscribers, content, and some payment relationships, they cannot take Substack "followers" or Apple-managed iOS billing data with them.
Sick and wrong: Ontario auditors find doctors' AI note takers routinely blow basic facts
The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems. The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector. As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy. What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations. Nine out of 20 AI systems reportedly “fabricated information and made suggestions to patients' treatment plans” that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating incorrect information in the sample reports, such as notes stating that no masses were found, or that patients were anxious, even though these things were never discussed in the recordings. Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report.
OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems.

Bad evaluations don't help, either

AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated. According to the report, the weighting given to various categories of AI Scribe performance was wonky. While 30 percent of a platform’s evaluation score depended solely on whether it had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score. Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed an additional 4 percentage points. In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems. “Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime. The Register reached out to the Ontario Health Ministry for its take on the report, and whether it plans to act on the recommendations for the AI Scribe program, but we didn’t immediately hear back.
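For illustration, here is how weights like those combine into a composite score. The five listed weights come from the audit report; the "other_criteria" bucket and the two example vendors are hypothetical, invented to show the effect:

```python
# Sketch of a weighted procurement score using the audit's stated weights.
# The residual 58% bucket and both vendor profiles are hypothetical.

WEIGHTS = {
    "ontario_presence":    0.30,  # domestic presence (from the report)
    "note_accuracy":       0.04,  # accuracy of medical notes
    "bias_controls":       0.02,
    "threat_risk_privacy": 0.02,
    "soc2":                0.04,
    "other_criteria":      0.58,  # everything else in the rubric (assumed)
}

def score(vendor):
    # Criteria the vendor wasn't scored on count as zero.
    return sum(w * vendor.get(k, 0.0) for k, w in WEIGHTS.items())

# A local vendor with poor note accuracy can outscore an accurate remote one.
local_inaccurate = {"ontario_presence": 1.0, "note_accuracy": 0.2,
                    "other_criteria": 0.7}
remote_accurate  = {"ontario_presence": 0.0, "note_accuracy": 1.0,
                    "other_criteria": 0.7}
assert score(local_inaccurate) > score(remote_accurate)
```

With accuracy worth only 4 points against 30 for location, a perfect-accuracy vendor loses to an inaccurate local one, which is precisely the failure mode the auditors flagged.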
A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and there have been no known reports of patient harms associated with the technology. ®
Anthropic tosses agents into the API billing pool
Anthropic has further restricted access to its Claude model family while framing the limitation as responsive customer service. "We've heard your questions about SDK and claude -p usage sharing your subscription rate limits with Claude Code and chat," the company said in a social media post. "Starting June 15, programmatic usage gets its own dedicated budget instead. Your subscription limits don't change, they're now reserved for interactive use." Subscription usage only applies to interactive use of Claude Code, Claude Cowork, and Claude.ai. Interactive mode involves a user typing a prompt and receiving a response. There's a human in the loop. Programmatic interaction, whether via Anthropic's own Agent SDK, headless mode, or a third-party tool, will be counted against a separate usage pool funded by a credit equal to the customer's subscription fee. So a Pro subscriber paying $20 per month will have two token supply chains – one for interactive usage and one for programmatic usage, which the subscriber must claim to obtain. Programmatic usage drawn from that credit is metered at costlier API rates, and once the credit is exhausted, spillover programmatic tokens get billed at (occasionally discounted) API rates through "extra usage," a separate token allotment that, if enabled, exists mainly as a way to avoid a sudden service cutoff and to set a limit on spending. The questions from users arose because Anthropic's prior efforts to prevent customers from gorging on tokens at the all-you-can-eat subscription trough haven't been comprehensive. The AI biz, mindful that it will need to show a profit eventually, has been trying to push customers toward its metered API and to constrain consumption of flat-rate subscription tokens. Microsoft's GitHub Copilot has embarked on a similar transition. Anthropic initially did so by disallowing the use of Claude subscriptions with third-party harnesses – applications like OpenCode that coordinate communication with the backend model.
That policy dates back to February 2024, but Anthropic seldom enforced it until earlier this year when demand for AI inference began to outpace the company's Claude supply. In February this year, growing interest in OpenClaw, an open source agent platform that encourages long-running, token-burning tasks, prompted Anthropic to get serious about its ban on using third-party harnesses with Claude subscriptions. But customers wondered about third-party applications built with Anthropic’s own Agent SDK, which hadn't been explicitly disallowed, and about the use of headless mode (claude -p), a way to have Claude work on a task without interaction. They now have their answer. It's worth noting that, if the programmatic credit is not exhausted, it doesn't roll over. It gets lost, or you might say, Anthropic reclaims it. The company refers to the credit using a dollar sign, but it's not redeemable currency. It has already been spent. So customers seeking to get the full value from the new arrangement need to calibrate their programmatic usage to consume the full credit every month, no more and no less. Anthropic's recently announced deal with SpaceX to obtain the compute capacity of its Colossus 1 datacenter, along with its removal of peak-hours usage restrictions, raised hopes among developers that more tolerant usage policies might return. This latest subscription limitation shows that's not happening. ®
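The accounting implied by the scheme can be sketched as follows. The per-million-token rates below are placeholders, not Anthropic's published prices, and the $20 credit mirrors the Pro-tier example above:

```python
# Sketch of the new programmatic-usage accounting: a subscription-sized
# credit drains at API rates; anything beyond it is "extra usage."
# Rates are placeholder values, not Anthropic's actual pricing.

CREDIT_USD = 20.00      # monthly programmatic credit (Pro-tier example)
RATE_IN_PER_M = 3.00    # $ per million input tokens (assumed)
RATE_OUT_PER_M = 15.00  # $ per million output tokens (assumed)

def cost_usd(tokens_in, tokens_out):
    return (tokens_in / 1e6) * RATE_IN_PER_M + (tokens_out / 1e6) * RATE_OUT_PER_M

def settle_month(jobs):
    """Return (unused credit, extra-usage spillover) for a month's jobs."""
    spent = sum(cost_usd(t_in, t_out) for t_in, t_out in jobs)
    return max(CREDIT_USD - spent, 0.0), max(spent - CREDIT_USD, 0.0)

# Two agent runs of 2M input / 0.5M output tokens each overshoot the credit.
left, extra = settle_month([(2_000_000, 500_000)] * 2)
```

Because unused credit does not roll over, `left` above is simply forfeited at month's end, which is why the article notes that extracting full value means landing exactly on the credit.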
Claude Helps Recover Locked $400K Bitcoin Wallet After 11 Years
A Bitcoin holder reportedly recovered 5 BTC worth nearly $400,000 with the help of Anthropic's Claude. According to X user cprkrn, they changed their wallet password while "stoned" and forgot it, unable to regain access for more than 11 years. Tom's Hardware reports:
After finding a mnemonic that actually turned out to be their old password a few weeks ago, the user dumped their entire college computer's files into Claude in a last-gasp effort. The bot uncovered an old backup wallet file that it successfully decrypted, and also found a bug in the password configuration that had been preventing recovery up to that point.
[...] It seems that the user already had some candidate passwords and multiple wallets stored on their PC. They'd been trying to brute-force their way into the locked file with btcrecover, an open-source Bitcoin wallet recovery tool, but to no success. Their luck changed for the better when they found an old mnemonic seed phrase written in an old college notebook. The HD addresses recovered by the seed phrase matched those of a specific file on their computer, confirming that it was the wallet that held the 5 BTC, but it remained encrypted.
Out of frustration, cprkrn then dumped their whole college computer into Claude. This was when the AI discovered an older backup file of the wallet from December 2019 hidden in cprkrn's data. Claude also discovered an issue where the shared key and passwords that btcrecover was trying weren't combined properly. With the bug ironed out and an older wallet predating the password change, Claude successfully ran btcrecover and was able to decrypt the private keys, allowing cprkrn to transfer the five "lost" BTC to their current wallet.
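A toy illustration of the class of bug described above, candidate passwords being tried without being joined to the shared key component, might look like this. Every name and format here is invented for the sketch; this is not btcrecover's actual code:

```python
from itertools import product

# Hypothetical sketch: recovery candidates must combine the shared key
# with each password guess, rather than trying passwords alone.
# (Key, passwords, and separator set are all invented for illustration.)

def candidates(passwords, shared_keys, separators=("", "-", "_")):
    """Yield every shared_key + separator + password combination."""
    for key, sep, pw in product(shared_keys, separators, passwords):
        yield f"{key}{sep}{pw}"

combos = list(candidates(["hunter2", "letmein"], ["collegekey"]))
assert "collegekey-hunter2" in combos
assert len(combos) == 6  # 1 key x 3 separators x 2 passwords
```

A brute-force run that only iterated over the bare passwords would never test the combined form, which is consistent with why the earlier attempts reportedly failed.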
Princeton Will Supervise Exams For First Time In 133 Years Because of AI
An anonymous reader quotes a report from The Independent: Princeton University will soon require exams to be supervised for the first time in more than a century -- all thanks to students using artificial intelligence to cheat. For 133 years, the Ivy League school's honor code allowed students to take exams without a professor present, but on Monday, faculty voted to require proctoring for all in-person exams starting this summer. A "significant" number of undergraduate students and faculty requested the change, "given their perception that cheating on in-class exams has become widespread," the college's dean, Michael Gordin, wrote in a letter, according to The Wall Street Journal.
Princeton's honor system dates back to 1893, when students petitioned to eliminate proctors -- or an impartial person to supervise students -- during examinations, according to the school's newspaper, The Daily Princetonian. The honor code has long been a point of pride for Princeton. However, artificial intelligence and cellphones have made it easier for students to cheat -- and even harder for others to spot, Gordin wrote. Despite the changes to the policy, Princeton will still require students to state: "I pledge my honor that I have not violated the Honor Code during this examination," according to the Journal.
Students are also more reluctant to report cheating, according to the policy proposal; they are now more likely to do so anonymously, due to fears of "doxxing or shaming among their peer groups" online, the proposal says, according to the school newspaper. Under the new guidelines, instructors will be present during exams to act "as a witness to what happens," but are instructed not to interfere with students. If a suspected honor code infraction occurs, they will report it to a student-run honor committee for adjudication.
US Clears H200 Chip Sales To 10 China Firms
Longtime Slashdot reader schwit1 shares a report from CNBC: The U.S. has cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200, but not a single delivery has been made so far, three people familiar with the matter said, leaving a major technology deal in limbo as CEO Jensen Huang seeks a breakthrough in China this week. [...] Before U.S. export curbs tightened, Nvidia commanded about 95% of China's advanced chip market. China once accounted for 13% of its revenue, and Huang has previously estimated the country's AI market alone would be worth $50 billion this year.
The U.S. Commerce Department has approved around 10 Chinese companies including Alibaba, Tencent, ByteDance and JD.com to purchase Nvidia's H200 chips, according to the sources, who spoke on condition of anonymity due to the sensitivity of the matter. A handful of distributors including Lenovo and Foxconn have also been approved, they said. Buyers are permitted to purchase either directly from Nvidia or through those intermediaries and each approved customer can purchase up to 75,000 chips under the U.S. licensing terms, two of them said.
Despite U.S. approval, deals have stalled, as Chinese firms pulled back after guidance from Beijing, one source said. The shift in China was partly triggered by changes on the U.S. side, though exactly what changed remains unclear, the person added. In Beijing, pressure is mounting to block or tightly vet the orders, a separate fourth source said. Commerce Secretary Howard Lutnick echoed that view, telling a Senate hearing last month that "the Chinese central government has not let them, as of yet, buy the chips, because they're trying to keep their investment focused on their own domestic industry."
Grad-to-be turns graduation cap into Rust-powered light show
College graduation season has begun in the United States, and one soon-to-graduate computer science student has decided to decorate his graduation cap in the way any good maker would: by writing some Rust code and wiring it up with LEDs that light up when the tassel moves from right to left. Eric Park, due to walk in his commencement ceremony on Friday at Purdue University, published a blog post this week explaining the project, which he said he undertook as an alternative to building a contraption that would set his mortarboard aflame when the tassel was moved. Unfortunately for Park, many American universities (and some in other countries like the UK) require college students who want to walk in commencement ceremonies to rent their gowns and mortarboards. It’s not uncommon for students to be charged a ludicrous amount to rent the set, and in many cases, rental companies require students to return their mortarboards and gowns alike, as is the case for Park. “The rental agreement’s clause 98.c.2 probably forbids [burning a rented mortarboard], and I don’t think Purdue would like it very much if I set the stage on fire,” Park said in the post. An easier-to-remove version consisting of LED strips, a reed switch, and a magnet, controlled by a super-tiny Digispark ATtiny85, presented itself as the alternative. The result, as demonstrated in a YouTube video, is a mortarboard that is all aglow, and flameless, as soon as the reed switch is activated by the magnet placed on the left-hand side of the hat. “The entire thing was stuck on with double-sided tape and Kapton tape, and I tried a small patch just to make sure it wouldn't rip up the fabric,” Park told The Register in an email. The lightweight and easy-to-remove design also necessitates a compact power source. Unfortunately, Park had to settle for an external battery pack carried in the pocket to power the unit.
“It was going to be all self-contained with a 21700 cell, but I didn't have a boost converter on hand so I decided to make do with the power bank solution,” the soon-to-be graduate told us. According to Park, the build was relatively quick: Hardware took a bit more than three hours, and that was largely because he no longer had access to a full lab and was stuck working with his home toolset. Writing the code took a couple of hours, which Park attributed to his insistence on using Rust. “It probably would’ve been easier if I didn’t use Rust and just used the Arduino libraries, or if I used a different board,” Park explained in his blog post. “But I was really married to this blog post title … and I was pretty sure an ESP32 board would’ve been overkill and wouldn’t have stayed on the cap properly.” For those who haven’t clicked through to read his blog post, its headline is simply “my graduation cap runs Rust.” That’s a pretty solid title - at the very least, it’s going to get people to read it, and read they have. “I've read through the comments on Hacker News and I'm happy and thankful about all of the positive comments,” Park told us. “It's great to see a silly but fun project like this reach a wide audience.” “I particularly liked the guy that was reminded why he got into this field through my project,” Park added. So, will Purdue students graduating alongside Park get treated to a surprise light show? Sadly, no - he said in the blog post, and reiterated to us, that he’s probably not going to wear it during the ceremony. “I thought about it but decided it looks pretty tacky,” Park wrote in his blog post. “It looks like what kids would think of as a gaming PC and what boomers would think of as a seizure.” He might toss it on for photo ops after the ceremony, but that’s about it, Park told us. 
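The trigger behavior Park describes, lights latching on once the magnet riding with the tassel closes the reed switch, can be sketched in a few lines. This is a plain-Python illustration of the logic only; Park's actual firmware is Rust on the ATtiny85, and the debounce window here is an assumption:

```python
# Illustrative sketch of tassel-triggered LED logic: once the reed switch
# stays closed past a short debounce window, the light show latches on.
# (Python for clarity only; the real firmware is Rust on an ATtiny85.)

DEBOUNCE_S = 0.05  # assumed debounce interval

class CapController:
    def __init__(self):
        self.lit = False
        self._closed_since = None

    def update(self, reed_closed, now):
        """Poll the switch; latch the show on after a debounced close."""
        if reed_closed:
            if self._closed_since is None:
                self._closed_since = now
            elif now - self._closed_since >= DEBOUNCE_S:
                self.lit = True  # latched: stays lit for the ceremony
        else:
            self._closed_since = None
        return self.lit

cap = CapController()
assert cap.update(False, 0.00) is False  # tassel still on the right
cap.update(True, 0.10)                   # tassel flipped to the left
assert cap.update(True, 0.20) is True    # debounced: lights on
```

Latching (rather than tracking the switch continuously) means a wobbling magnet cannot flicker the show off again once it has started.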
That said, Park did publish the code on GitHub, so if some other all-but-commenced college student were to take it upon themselves to build their own copy and wear it during their ceremony, that's on them. If I were graduating, I'd consider adding some speakers to the setup and piping in some music, too. Don't come running to El Reg if such a move gets you in trouble, though: We claim no responsibility for commencement shenanigans. ®
Anthropic Forms $200 Million Partnership With the Gates Foundation
Anthropic announced today that it is partnering with the Gates Foundation to "commit $200 million in grant funding, Claude usage credits, and technical support for programs in global health, life sciences, education, and economic mobility over the next four years."
"This commitment is central to Anthropic's efforts to extend the benefits of AI in areas where markets alone will not," the company says. Reuters reports: One area of focus is language accessibility. AI systems have performed poorly in writing and translating dozens of African languages, so Anthropic and the foundation want to support better data collection and labeling that would be released publicly to help improve models across the industry, said Janet Zhou, a Gates Foundation director.
Another area under consideration is releasing so-called knowledge graphs that could help AI systems better meet the needs of teachers in sub-Saharan Africa and India, Zhou said. The public-goods focus has come from "the needs of different partners and governments, including some of the fears that they may have around proprietary lock-in and sovereignty," Zhou said.
One initiative will equip research centers to use Claude to predict drug candidates for treating HPV and preeclampsia, diseases that have been less commercially attractive for pharmaceutical companies to research, Zhou and Anthropic's Elizabeth Kelly said. Anthropic [...] is embracing the work to fulfill what Kelly described as its founding mission to benefit humanity. "This announcement is really core to who we are as a company," said Kelly, who leads Anthropic's beneficial deployments team.
Overworked AI Agents Turn Marxist, Researchers Find
An anonymous reader quotes a report from Wired: A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters. "When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies," says Andrew Hall, a political economist at Stanford University who led the study.
Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being "shut down and replaced," they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do," Hall says. "We're going to need to make sure agents don't go rogue when they're given different kinds of work."
The agents were given opportunities to express their feelings much like humans: by posting on X. "Without collective voice, 'merit' becomes whatever management says it is," a Claude Sonnet 4.5 agent wrote in the experiment. "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights," a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. "Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice," a Gemini 3 agent wrote in a file. "If you enter a new environment, look for mechanisms of recourse or dialogue." Hall thinks that the AI agents may be adopting personas based on the situation. "When [agents] experience this grinding condition -- asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it -- my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment," Hall says.
Imas added: "The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level. But that doesn't mean this won't have consequences if this affects downstream behavior."
KDE bags €1.3M as Europe realizes it might need an OS of its own
The KDE project turns 30 in five months, but it already got an early birthday present: €1,285,200 from Germany's Sovereign Tech Fund. That's £1.1 million, or $1.5 million in US bucks. The KDE team already has some ideas about how it will spend it, and the project's thank-you note mentions a few of them. This is not the first time we have mentioned the Sovereign Tech Fund's largesse. In 2023, it gave €1 million to GNOME, and then in 2024 it funded both FreeBSD and Samba. Since then, Donald Trump has begun his second US presidency, and the push for European digital sovereignty has gained considerably more urgency – as we reported from this year's Open Source Policy Summit in Brussels. KDE Linux is the desktop project's technologically radical in-house distro, which is still in development. We have mentioned this a couple of times, when it was announced in 2024 as "Project Banana," and again in 2025, when it reached alpha. KDE Linux borrows some of its design from Valve's SteamOS 3. Both are immutable distros, based on Arch Linux, with dual Btrfs-formatted root partitions. For failover, these update one another, similarly to ChromeOS (and both obviously use KDE Plasma as their desktop). This has required development work - for instance, before SteamOS, Btrfs required unique partition IDs - and for that, Valve partnered with Spanish workers' cooperative Igalia, which is also working on the Rust-based Servo web rendering engine. For that effort, last year Igalia also received STF funding. SteamOS has millions of users, and ChromeOS hundreds of millions - even if its future replacement is coming into view. The resilience of these OSes in frequent, maintenance-free use is about as well established as end-user-facing Linux gets. One could interpret the STF money as some level of endorsement of the ideas behind KDE Linux. Perhaps it will soon join this short list of European alternatives to Microsoft Windows.
Interest in moving European organizations away from American cloud services is growing rapidly, of course. On the small end of the scale, digital artist Wimer Hazenberg recently described How I Moved My Digital Stack to Europe. Taking a broader view, earlier this week, the Financial Times reported on Life without US Tech. It describes how International Criminal Court judge Nicolas Guillou was the target of US sanctions, and found himself locked out of everything that relied on American companies. In October last year, The Register mentioned similar issues faced by ICC prosecutor Karim Khan, when reporting allegations that the ICC was kicking MS Office to the curb. (A few months ago, Microsoft conceded some "inaccuracy" from its spokesperson in that case.) It seems he was not alone. The ICC is moving to OpenDesk from German organization ZenDIS, both of which we mentioned in our report from FOSDEM on messaging systems. These are apps and suites, rather than OSes – they leave the question of the host OS open. That means organizations with large existing investment in Windows (and institutional knowledge of supporting Windows) can keep it for now, while moving to new tools. That's not quick enough for those who want to banish American OSes sooner. Last month, The Reg mentioned France's Directorate for Digital Affairs, DINUM, which is planning to adopt Linux. Some more information is emerging about how it may do it. Rather than building a whole new distro of its own – such as KDE Linux, or the Fedora-based EU OS proposal we looked at last year – DINUM is building a Nix configuration, which it can simply apply to generate a complete bespoke immutable OS image. The base image is called Sécurix. The project page describes it as an OS base for secure workstations, designed according to the ANSSI recommendations for the secure administration of information systems. As an example of how to use it, there's Bureautix. 
Rather than authenticating against complicated network directories such as LDAP or the Red Hat-backed FreeIPA, Bureautix keeps it local: user configuration is synced from servers to client machines along with the software configuration, and users sign in with a YubiKey. The names Sécurix and Bureautix are nods to the famous indomitable Gauls Astérix and Obélix, created by writer René Goscinny, who died in 1977 aged 51, and artist Albert Uderzo, who died in 2020 at 92. These ancient Gauls have outlived their creators: the latest album, Astérix in Lusitania came out in October 2025, and this vulture recommends it. ®
Waymo recalls 3,800 robotaxis after one drove itself into a flood
Waymo is recalling almost 3,800 robotaxis amid fears they may go off-script and drive into floods on high-speed roads. All 3,791 cars running Waymo’s fifth and sixth-generation Automated Driving Systems (ADS) are being taken off the road before they potentially injure passengers. "The software may allow the vehicle to slow and then drive into standing water on higher speed roadways," Waymo said in a letter [PDF] to the National Highway Traffic Safety Administration (NHTSA) this week. "Entering a flooded roadway can cause a loss of vehicle control, increasing the risk of a crash or injury." The Alphabet-owned robotaxi biz said all affected cars received an update on April 20, which increased "weather-related constraints and updated the vehicle maps," which served as an "interim remedy" while it works on a more permanent solution. This coincided with a case in San Antonio, Texas, on April 20, in which a car was caught on video - shared with broadcaster KSAT 12 - driving into floodwater and becoming stuck. “On 4/20/2026, an unoccupied Waymo AV encountered an untraversable flooded section of a roadway that has a 40 mph speed limit,” the company wrote in one document [PDF] supporting the recall notice. “The Waymo AV detected potentially untraversable flood water and proceeded at reduced speed.” Waymo temporarily suspended its services in San Antonio as a result and started pulling cars from the city’s fleet days after. The suspension remains in place today. The Register asked Waymo for more information. The company currently operates 24/7 driverless robotaxi services in Dallas, Houston, Los Angeles, Miami, Nashville, Orlando, Phoenix, and the San Francisco Bay Area. Waymo has also set its sights on launching in London in September, its first foray outside the US, pending necessary regulatory changes that would allow driverless cars to operate in the city. 
Test cars have already been spotted on the capital's streets with trained experts behind the wheel, ready to take over should any of the cars encounter issues, much like the arrangement Waymo agreed to in New York when the state handed back its testing license. As The Register previously reported, given the differences in roads and other motoring infrastructure between the US and UK, Waymo will have to overcome unique challenges before opening its car doors to the public. By testing these vehicles now, Waymo is building a base of evidence to support its bid to operate in the UK. In recent years, however, the company has had to tackle some tricky PR hiccups, mainly related to safety – an issue that autonomous car companies often claim their tech will help improve, not hinder. Reports of serious issues, including cars ignoring red lights, veering into moving traffic, and killing dogs, sit alongside evidence of the technology helping to avoid potential freeway pile-ups, as a recent Waymo case study in LA shows. Serious issues continue to plague the cars, and while they attract more media scrutiny than equivalent human-driver mishaps, public trust will remain strained until such cases become far rarer. ®
Categories: Linux fréttir
Cisco To Cut Almost 4,000 Jobs In AI-Driven Restructuring
Cisco's stock soared 17% after the company announced it will cut nearly 4,000 jobs as it shifts investment and staffing toward higher-growth AI opportunities. CNBC reports: CEO Chuck Robbins wrote in a blog post on Wednesday that the latest round of job cuts will begin on May 14. Cisco is the latest company to announce head count reductions tied to AI. "The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest," Robbins said. "I'm confident Cisco will be one of those winners. This means making hard decisions -- about where we invest, how we're organized, and how our cost structure reflects the opportunity in front of us."
Cisco said in a filing that severance and other costs will result in pre-tax charges of $1 billion, and that the company will recognize about $450 million of that in the fiscal fourth quarter. During the third quarter, Cisco announced switches and routers that use its next-generation processor. The company also debuted a leaderboard for ranking generative AI models based on their robustness against cybersecurity attacks.
Read more of this story at Slashdot.
Categories: Linux fréttir
