Linux fréttir
An anonymous reader quotes a report from Ars Technica: On Tuesday, Westinghouse announced that it had reached an agreement with the Trump administration that would purportedly see $80 billion of new nuclear reactors built in the US. And the government indicated that it had finalized plans for a collaboration between GE Vernova and Hitachi to build additional reactors. Unfortunately, there are roughly zero details about the deal at the moment. The agreements were apparently negotiated during President Trump's trip to Japan. An announcement of those agreements indicates that "Japan and various Japanese companies" would invest "up to" $332 billion in energy infrastructure. It specifically mentions Westinghouse, GE Vernova, and Hitachi, and promises the construction of both large AP1000 reactors and small modular nuclear reactors. The announcement then goes on to indicate that many other companies would also get a slice of that "up to $332 billion," many for basic grid infrastructure. The report notes that no reactors are currently under construction and Westinghouse's last two projects ended in bankruptcy. According to the Financial Times, the government may share in profits and ownership if the deal proceeds.
Read more of this story at Slashdot.
At TechCrunch Disrupt 2025, Waymo co-CEO Tekedra Mawakana said society will ultimately accept a fatal robotaxi crash as part of the broader tradeoff for safer roads overall. TechCrunch reports: The topic of a fatal robotaxi crash came up during Mawakana's interview with Kristen Korosec, TechCrunch's transportation editor, on the first day of the outlet's annual Disrupt conference in San Francisco. Korosec asked Mawakana about Waymo's ambitions and got answer after answer about the company's all-consuming focus on safety. The most interesting part of the interview arrived when Korosec brought up a thought experiment. What if self-driving vehicles like Waymo and others reduce the number of traffic fatalities in the United States, but a self-driving vehicle does eventually cause a fatal crash, Korosec pondered. Or as she put it to the executive: "Will society accept that? Will society accept a death potentially caused by a robot?"
"I think that society will," Mawakana answered, slowly, before positioning the question as an industrywide issue. "I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to." She said that companies should be transparent about their records by publishing data about how many crashes they're involved in, and she pointed to the "hub" of safety information on Waymo's website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: "We have to be in this open and honest dialogue about the fact that we know it's not perfection."
Circling back to the idea of a fatal crash, she said, "We really worry as a company about those days. You know, we don't say 'whether.' We say 'when.' And we plan for them." Korosec followed up, asking if there had been safety issues that prompted Waymo to "pump the brakes" on its expansion plans throughout the years. The co-CEO said the company pulls back and retests "all the time," pointing to challenges with blocking emergency vehicles as an example. "We need to make sure that the performance is backing what we're saying we're doing," she said. [...] "If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer," Mawakana said.
Read more of this story at Slashdot.
NVIDIA has introduced NVQLink, an open system architecture that directly connects quantum processors with GPU-based supercomputers. The Quantum Insider reports: The new platform connects the high-speed, high-throughput performance of NVIDIA's GPU computing with quantum processing units (QPUs), allowing researchers to manage the intricate control and error-correction workloads required by quantum devices. According to an NVIDIA statement, the system was developed with guidance from researchers at major U.S. national laboratories including Brookhaven, Fermi, Lawrence Berkeley, Los Alamos, MIT Lincoln, Oak Ridge, Pacific Northwest, and Sandia.
Qubits, the basic units of quantum information, are extremely sensitive to noise and decoherence, making them prone to errors. Correcting and stabilizing these systems requires near-instantaneous feedback and coordination with classical processors. NVQLink is meant to meet that demand by providing an open, low-latency interconnect between quantum processors, control systems, and supercomputers -- effectively creating a unified environment for hybrid quantum applications.
The architecture offers a standardized, open approach to quantum integration, aligning with the company's CUDA-Q software platform to enable researchers to develop, test, and scale hybrid algorithms that draw simultaneously on CPUs, GPUs, and QPUs. The U.S. Department of Energy (DOE) -- which oversees several of the participating laboratories -- framed NVQLink as part of a broader national effort to sustain leadership in high-performance computing, according to NVIDIA.
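For readers unfamiliar with the CUDA-Q programming model mentioned above, the sketch below shows what a minimal hybrid program looks like in CUDA-Q's Python API. The circuit (a GHZ state), the qubit count, and the shot count are illustrative choices, not anything specific to NVQLink or the DOE labs' workloads.

```python
# Minimal hybrid quantum-classical sketch using CUDA-Q's Python API.
# The GHZ circuit and shot count are illustrative assumptions only.
import cudaq

@cudaq.kernel
def ghz(qubit_count: int):
    qubits = cudaq.qvector(qubit_count)   # allocate qubits
    h(qubits[0])                          # put the first qubit in superposition
    for i in range(1, qubit_count):
        x.ctrl(qubits[i - 1], qubits[i])  # entangle the chain
    mz(qubits)                            # measure all qubits

# Classical host code launches the kernel and post-processes the counts --
# the kind of tight CPU/GPU-to-QPU loop NVQLink is designed to accelerate
# when a real quantum processor sits on the other end.
counts = cudaq.sample(ghz, 4, shots_count=1000)
print(counts)
```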
Read more of this story at Slashdot.
Internal dependencies again prove problematic
Amazon Web Services’ US-EAST-1 region, which last week caused massive disruption to online services, is having another bad day as internal dependencies again prove problematic.…
darwinmac writes: Ubuntu Unity is staring at a possible shutdown. A community moderator has gone public pleading for help, admitting the project is "broken and needs to be fixed." Neowin reports the distro is suffering from critical bugs so severe that upgrades from 25.04 to 25.10 are failing and even fresh installs are hit. The moderator admits they lack the technical skill or time to perform a full rescue and is asking the broader community, including devs, testers, and UI designers, to step in so Ubuntu Unity can reach 26.04 LTS. If no one comes forward soon, this community flavor might quietly fade away once more.
Read more of this story at Slashdot.
An anonymous reader quotes a report from NBC News: Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide. The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimonies about their kids' use of the chatbots and called for more safeguards.
"AI chatbots pose a serious threat to our kids," Hawley said in a statement to NBC News. "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology." Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.
The senators' bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials for all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal said in a statement. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."
"Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety," he continued.
Read more of this story at Slashdot.
Corporate restructuring will benefit ... uh, humanity
OpenAI has obtained a new lease on life.…
hackingbear shares a report from Crypto News: Two Chinese artificial intelligence (AI) models, DeepSeek V3.1 and Alibaba's Qwen3-Max, have taken a commanding lead over their US counterparts in a live, real-world, real-money cryptocurrency trading competition, posting triple-digit gains in less than two weeks. According to Alpha Arena, a real-market trading challenge launched by US research firm Nof1, DeepSeek's Chat V3.1 turned an initial $10,000 into $22,900 by Monday, a 126% increase since trading began on October 18, while Qwen 3 Max followed closely with a 108% return.
In stark contrast, US models lagged far behind. OpenAI's GPT-5 posted the worst performance, losing nearly 60% of its portfolio, while Google DeepMind's Gemini 2.5 Pro showed a similar 57% decline. xAI's Grok 4 and Anthropic's Claude 4.5 Sonnet fared slightly better, returning 14% and 23% respectively. "Our goal with Alpha Arena is to make benchmarks more like the real world -- and markets are perfect for this," Nof1 said on its website.
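The quoted figures are simple percentage changes on a $10,000 starting stake. A rough check is below; the GPT-5 end value is hypothetical (chosen to match the reported "nearly 60%" loss), and the small gap between the computed and quoted DeepSeek numbers presumably comes down to when the snapshot was taken.

```python
# Simple percentage-return arithmetic behind the figures above.
def pct_return(start: float, end: float) -> float:
    return (end - start) / start * 100

print(f"DeepSeek: {pct_return(10_000, 22_900):.0f}%")  # ~129% vs the quoted 126%
print(f"GPT-5:    {pct_return(10_000, 4_000):.0f}%")   # hypothetical end value for a ~60% loss
```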
Read more of this story at Slashdot.
A report from cyber-insurer At-Bay fingers Cisco and Citrix VPNs as most likely to lead to ransomware trouble
Organizations using Cisco and Citrix VPN devices were nearly seven times as likely to suffer a ransomware infection over a 15-month period, according to At-Bay, a provider of cyber insurance and a vendor of managed detection and response products.…
The Python Software Foundation rejected a $1.5 million U.S. government grant because it required them to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) of the grant it would have to follow. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole."
To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization. "The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received -- but it wasn't worth it if the conditions were undermining the PSF's mission. The PSF board voted unanimously to withdraw its grant application.
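For context on the kind of work the grant would have funded: automated review of new PyPI packages typically starts from package metadata. The sketch below is purely illustrative and is not the PSF's planned design; the red-flag heuristics are assumptions, and only the public PyPI JSON endpoint is real.

```python
# Illustrative only: a naive metadata check of the sort an automated PyPI
# reviewer might build on. The heuristics here are assumptions.
import json
from urllib.request import urlopen

def fetch_metadata(package: str) -> dict:
    """Fetch a package's metadata from PyPI's public JSON endpoint."""
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)

info = fetch_metadata("requests")["info"]
flags = []
if not info.get("project_urls"):
    flags.append("no project URLs")
if not info.get("description"):
    flags.append("empty description")
print(info["name"], info["version"], flags or "no obvious flags")
```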
Read more of this story at Slashdot.
An anonymous reader quotes a report from Variety: A news special on Britain's Channel 4 titled "Will AI Take My Job?" investigated how automation is reshaping the workplace and pitting humans against machines. At the end of the hour-long program, a major twist was revealed: the anchor, who narrates and appears throughout the telecast reporting from different locations, was entirely AI-generated.
In the final moments of the special, the host says: "AI is going to touch everybody's lives in the next few years. And for some, it will take their jobs. Call center workers? Customer service agents? Maybe even TV presenters like me. Because I'm not real. In a British TV first, I'm an AI presenter. Some of you might have guessed: I don't exist, I wasn't on location reporting this story. My image and voice were generated using AI."
The hour aired Monday at 8 p.m. as part of the "Dispatches" documentary program, which Channel 4 says is now the first British television show to feature an AI presenter. The "anchor" was produced by AI fashion brand Seraphinne Vallora for Kalel Productions and was guided by prompts to create a realistic on-camera performance. "The use of an AI presenter is not something we will be making a habit of at Channel 4 -- instead our focus in news and current affairs is on premium, fact checked, duly impartial and trusted journalism -- something AI is not capable of doing," said Louisa Compton, Channel 4's head of news and current affairs. "But this stunt does serve as a useful reminder of just how disruptive AI has the potential to be -- and how easy it is to hoodwink audiences with content they have no way of verifying."
Read more of this story at Slashdot.
Ólafur Waage has an unusual take on "will it run Doom?"
Ubuntu Summit: Doom takes place on Mars, but up until recently, it has only been played on Earth. However, at the Ubuntu Summit, one enterprising developer explained how he extended the well-established "will it run Doom?" meme all the way into space.…
A cyclist who received severe third-degree burns to his head after being struck by a drunk driver has been fitted with a 3D-printed face. The Guardian: Dave Richards, 75, was given a 3D prosthetic by the NHS that fits the space on his face and mimics his hair colour, eye colour and skin. [...] While recovering, he was referred to reconstructive prosthetics, which has opened the Bristol 3D medical centre, the first of its kind in the UK to have 3D scanning, design and printing in a single NHS location. Richards, from Devon, said surgeons tried to save his eye but "they were worried any infection could spread from my eye down the optic nerve to the brain so the eye was removed."
[...] He called the process of getting a 3D-printed face "not the most pleasant." He added: "In the early days of my recovery, I felt very vulnerable, and would not expose myself to social situations. It took me a long time to feel comfortable about my image, how I thought people looked at me and what they thought of me -- but I have come a long way in that respect."
Read more of this story at Slashdot.
100,000 Blackwell GPUs and 2,200 exaFLOPs make for a big system
The US Department of Energy is partnering with Nvidia and Oracle to build seven new AI supercomputers to accelerate scientific research and develop agentic AI for discovery. Two of these systems, located at Argonne National Laboratory, will together form the DOE's largest AI supercomputing infrastructure.…
Nearly nine in ten Windows games can now run on Linux systems, according to data from ProtonDB compiled by Boiling Steam. The gains came through work by the developers of the WINE and Proton translation layers and through growing interest in hardware like the Steam Deck.
ProtonDB tracks games across five categories. Platinum-rated games run perfectly without adjustment. Gold titles need minor tweaks. Silver games are playable but imperfect. Bronze games run, but with significant problems, sitting between silver and borked. Borked games refuse to launch. The proportion of new releases earning platinum ratings has grown, and the red and dark red zones of Boiling Steam's charts, representing the worst-rated games, have thinned. Some popular titles remain incompatible, however; Boiling Steam noted that the developers of those titles appear averse to supporting non-Windows gamers.
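To make the tier breakdown concrete, here is a small sketch that aggregates counts into the headline "runs on Linux" share. Only the five tier names come from ProtonDB; the counts are invented for illustration.

```python
# Hypothetical counts standing in for Boiling Steam's ProtonDB dataset;
# only the five tier names are real.
counts = {"platinum": 5200, "gold": 2300, "silver": 900, "bronze": 400, "borked": 1200}

total = sum(counts.values())
runs = total - counts["borked"]                 # launches and is at least somewhat playable
smooth = counts["platinum"] + counts["gold"]    # runs well with little or no tweaking

print(f"Runs on Linux:  {runs / total:.0%}")    # ~88% with these made-up numbers
print(f"Gold or better: {smooth / total:.0%}")
```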
Read more of this story at Slashdot.
Humanity has failed to limit global heating to 1.5C and must change course immediately, the secretary general of the UN has warned. From a report: In his only interview before next month's Cop30 climate summit, Antonio Guterres acknowledged it is now "inevitable" that humanity will overshoot the target in the Paris climate agreement, with "devastating consequences" for the world. He urged the leaders who will gather in the Brazilian rainforest city of Belem to realise that the longer they delay cutting emissions, the greater the danger of passing catastrophic "tipping points" in the Amazon, the Arctic and the oceans.
"Let's recognise our failure," he told the Guardian and Amazon-based news organisation Sumauma. "The truth is that we have failed to avoid an overshooting above 1.5C in the next few years. And that going above 1.5C has devastating consequences. Some of these devastating consequences are tipping points, be it in the Amazon, be it in Greenland, or western Antarctica or the coral reefs.
He said the priority at Cop30 was to shift direction: "It is absolutely indispensable to change course in order to make sure that the overshoot is as short as possible and as low in intensity as possible to avoid tipping points like the Amazon. We don't want to see the Amazon as a savannah. But that is a real risk if we don't change course and if we don't make a dramatic decrease of emissions as soon as possible."
Read more of this story at Slashdot.
The pair intends to develop cellular infrastructure for running edge AI workloads
Nvidia CEO Jensen Huang on Tuesday announced a partnership with Nokia to integrate AI technology into its mobile network infrastructure, bringing accelerated computing to the edge and paving the way for 6G-ready networks. As part of the deal, Nvidia will invest $1 billion in Nokia. Team Green's gear will boost spectral efficiency and make AI inference more accessible from mobile devices.…
Nearly three out of every four restaurant orders are no longer eaten in a restaurant, according to the National Restaurant Association. The share of customers using delivery more than doubled from 2019 to 2024, and 41% of respondents in a recent poll said delivery was an essential part of their lifestyle. The transformation has fundamentally altered restaurant economics. Delivery companies charge restaurants commissions between 5 and 30%, along with fees for payment processing, advertising, and search placement.
Shannon Orr runs an eight-restaurant group on the West Coast. One of her restaurants generated $1.7 million in delivery sales last year. Of that, $400,000 went to delivery companies. The restaurant, previously among her most profitable, made no money in 2024, she told The Atlantic.
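The arithmetic behind that example is straightforward; the sketch below simply divides the reported payout by the reported delivery revenue (the article gives no breakdown between commissions and other fees).

```python
# Effective platform take rate implied by the figures reported above.
delivery_sales = 1_700_000   # the restaurant's delivery revenue last year
paid_to_platforms = 400_000  # total paid to delivery companies

print(f"Effective take rate: {paid_to_platforms / delivery_sales:.1%}")  # ~23.5%
```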
About a third of full-service restaurants have modified their physical spaces to accommodate the delivery boom, installing dedicated entrances, bike parking, and banks of lockers.
Read more of this story at Slashdot.
OpenAI has committed to spend about $1.4 trillion on infrastructure so far, equating to roughly 30 gigawatts of data center capacity, CEO Sam Altman said on Tuesday. From a report: The statement helps clarify the many announcements the company has made with its chip, data center and financing partners. That total includes the already announced deals with AMD, Broadcom, Nvidia, Oracle and other partners. That's just the starting point, Altman said. Over time, the company would like to have in place a technical and financial apparatus that would allow it to build a gigawatt of new capacity per week at a cost of around $20 billion per gigawatt.
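A rough back-of-the-envelope reading of those figures is below; the per-gigawatt division on current commitments is an inference from the quoted totals, not a number Altman gave.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
committed_usd = 1.4e12        # ~$1.4 trillion committed so far
committed_gw = 30             # ~30 GW of capacity those deals cover
print(f"Implied cost today: ~${committed_usd / committed_gw / 1e9:.0f}B per GW")

target_cost_per_gw = 20e9     # Altman's stated long-term target
weekly_gw = 1                 # the aspirational build rate
print(f"Target run rate: ~${target_cost_per_gw * weekly_gw * 52 / 1e12:.2f}T per year")
```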
Read more of this story at Slashdot.
By appearing more human, it evades detection
A new Android malware strain, Herodotus, steals credentials, logs keystrokes, streams victims' screens, and hijacks input -- but with a twist: it mimics human typing by adding random delays between keystrokes to evade behavioral fraud detection systems.…
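For context on the signal being evaded: behavioral fraud detection commonly looks at inter-keystroke timing, and perfectly even gaps are a giveaway that input was injected by software. The sketch below is a toy illustration of that idea, not any vendor's actual detection logic; the 15 ms threshold is an arbitrary assumption. Randomizing the delays, as Herodotus reportedly does, pushes the timing variance back into a human-looking range.

```python
# Toy illustration of keystroke-timing analysis; the threshold is an
# arbitrary assumption, not a real detection rule.
import statistics

def looks_scripted(key_times_ms: list[float], min_stdev_ms: float = 15.0) -> bool:
    """Flag a keystroke sequence whose timing variance is implausibly low."""
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return statistics.stdev(gaps) < min_stdev_ms

print(looks_scripted([0, 50, 100, 150, 200, 250]))   # True: perfectly even, bot-like gaps
print(looks_scripted([0, 120, 210, 380, 470, 640]))  # False: irregular, human-like gaps
```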