Linux fréttir

Open Source Registries Join Linux Foundation Working Group to Address Machine-Generated Traffic

Slashdot - Sun, 2026-05-10 01:34
Under the nonprofit Linux Foundation, "a new Sustaining Package Registries Working Group will seek to identify concrete funding, governance, and security practices," reports ZDNet, "to keep code flowing as download counts grow...." Because software builds, continuous integration pipelines, and AI systems hammer registries at machine speed rather than human speed, the sites can't keep up. "That growth has brought a surge in bot traffic, automated publishing, security reports, and outright abuse, exposing what the working group bluntly calls a 'sustainability gap'."

Sonatype CTO Brian Fox, who oversees the Maven Central Java registry, estimates open-source registries saw 10 trillion downloads in 2025. And "The same pattern is appearing across ecosystems. More machine traffic. More automation. More scanning. More expectations around uptime, integrity, provenance, and policy enforcement. More cost. More support burden. More dependency on infrastructure that the industry still talks about as though it runs on goodwill and spare time."

ZDNet reports that "To tackle that, Sonatype has teamed up with the Linux Foundation and other package registry leaders, including Alpha-Omega, Eclipse Foundation (OpenVSX), OpenJS Foundation, OpenSSF, Packagist, Python Software Foundation, Ruby Central (RubyGems), and the Rust Foundation (Crates)." The idea is to give operators a neutral forum to discuss money, governance, and shared operational burdens openly. Once that's dealt with, they'll coordinate how to explain those realities back to companies and organizations that have long assumed registries are "free." No, they're not. They never were.

As the Linux Foundation pointed out, "Registries today run primarily on two things: (1) infrastructure donations and credits; and (2) heroic efforts from small paid teams (themselves funded by donations and grants) and unpaid volunteers that operate and maintain registry services. The bulk of donations and grants comes from a small set of donors and doesn't scale with demands on the registry."

The working group is explicitly positioned as a venue where registry leaders and ecosystem stakeholders can align on "practical, community-minded" ways to sustain that infrastructure, rather than each operator improvising its own survival plan in isolation. ZDNet says the group will also coordinate security practices and information, and craft frameworks "that make it politically and legally possible to introduce sustainable funding models without fracturing communities." And they will also "align messaging and educational content so developers, companies, and policymakers finally understand what it costs to run these services."

Read more of this story at Slashdot.

Categories: Linux fréttir

Will Maryland's Utility Bills Increase $1.6B to Support Other States' Datacenters?

Slashdot - Sat, 2026-05-09 22:34
To upgrade its grid for data centers, PJM Interconnection (which serves 13 states) plans to spend $22 billion — and charge nearly $2 billion of that to customers in Maryland, argues Maryland's Office of People's Counsel. The money "will be recovered in rates for decades" and "drive up Maryland customer bills by $1.6 billion over the next ten years alone," they said Friday, announcing an official complaint filed with America's Federal Energy Regulatory Commission.

Extra demand is expected from Ohio, Pennsylvania, and Illinois "where demands driven by data centers are projected to grow substantially by 2036," they explain. But that means that Maryland customers "are subsidizing data center-driven transmission buildout by virtue of geographic proximity..."

Tom's Hardware explains: That means an extra $823 million for residential (approx. $345 per customer), $146 million for commercial (approx. $673 per customer), and $629 million for industrial customers (approx. $15,074 per customer)... "Maryland customers have neither caused the need for these billions in new transmission projects nor will they meaningfully benefit from them," [according to Maryland People's Counsel David S. Lapp]....

This is one of the biggest reasons why many AI hyperscalers are facing pushback from the communities where they intend to place their data centers. At the moment, around 69 jurisdictions have passed some sort of moratorium on projects like these, and a survey has shown that nearly half of Americans do not want a data center in their neighborhood. Debates around these projects are passionate, with a few cases turning violent and even resulting in shootings (thankfully, without any casualties), especially as many feel that the construction of these power-hungry assets is threatening their lifestyles and quality of life.

Thanks to long-time Slashdot reader noshellswill for sharing the news.
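The per-class figures from Tom's Hardware can be sanity-checked with a little arithmetic. The dollar amounts below are taken from the article; the implied customer counts are a back-of-the-envelope derivation, not figures from the actual FERC filing.

```python
# Cost allocations cited from Tom's Hardware, in millions of USD,
# alongside the approximate per-customer charge for each class.
costs_millions = {"residential": 823, "commercial": 146, "industrial": 629}
per_customer = {"residential": 345, "commercial": 673, "industrial": 15_074}

# The three classes should sum to roughly the $1.6B headline figure.
total_millions = sum(costs_millions.values())
print(f"total: ${total_millions}M over ten years")  # -> total: $1598M over ten years

# Dividing each class's total by its per-customer charge gives the
# implied number of Maryland customers in that class.
for cls, millions in costs_millions.items():
    implied = round(millions * 1_000_000 / per_customer[cls])
    print(f"{cls}: ~{implied:,} customers")
```

The ~$1.598 billion sum matches the Office of People's Counsel's "$1.6 billion over the next ten years" claim to within rounding.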

Read more of this story at Slashdot.

Categories: Linux fréttir

Rush Rescue Mission for NASA's $500M Space Telescope Passes Key Milestone

Slashdot - Sat, 2026-05-09 21:34
NASA's $500 million Neil Gehrels Swift space observatory was launched in 2004. But it's now "at risk of falling back through the atmosphere and burning up without intervention," reports Spaceflight Now. Fortunately, a mission to prevent that "just passed a notable prelaunch testing milestone."

On Friday, NASA announced that the Link spacecraft, manufactured by Katalyst Space Technologies to intervene before Swift's fate is sealed, completed its slate of environmental testing at the agency's Goddard Space Flight Center in Greenbelt, Maryland... "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude," [said John Van Eepoel, Swift's mission director at NASA Goddard, in a NASA press release]. "Katalyst has gotten to this point in just eight months, and we're glad they were able to use NASA's facilities to test Link and draw on our expertise to help tackle questions that popped up along the way...."

"Given how quickly Swift's orbit is decaying, we are in a race against the clock, but by leveraging commercial technologies that are already in development, we are meeting this challenge head-on," said Shawn Domagal-Goldman, acting director, Astrophysics Division, NASA Headquarters, at the time... Attempting an orbit boost is both more affordable than replacing Swift's capabilities with a new mission, and beneficial to the nation — expanding the use of satellite servicing to a new and broader class of spacecraft...."

Swift is in an orbit inclined 20.6 degrees from the equator, which is why Katalyst selected Northrop Grumman's Pegasus XL air-launched rocket in November to fly the mission. "The versatility offered by Pegasus' unique air-launch capability provides customers with a space launch solution that can be rapidly deployed anywhere on Earth to reach any orbit," said Kurt Eberly, Director of Space Launch for Northrop Grumman. The mission is set to launch in June.

Read more of this story at Slashdot.

Categories: Linux fréttir

The Trump Phone Either Is Or Isn't Closer To Delivery

Slashdot - Sat, 2026-05-09 20:34
September 2025? January 2026? Delivery dates keep slipping for the Trump Organization's "Trump Phone" — a gold-coloured Android smartphone priced at $499 (£370). But in March The Verge spotted signs the phone was moving forward: FCC listings for a smartphone with the trade name "T1" show that it was tested late last year, and granted certification by the FCC in January... [T]he phone was submitted for testing by another company entirely: Smart Gadgets Global, LLC... Smart Gadgets Global's website promises "Top Quality Electronics created for 'YOUR' customer!"

But in April the Trump phone revised its "Terms and Conditions" for preorders. The new language? A preorder deposit provides only a conditional opportunity if Trump Mobile later elects, in its sole discretion, to offer the Device for sale. A deposit is not a purchase, does not constitute acceptance of an order, does not create a contract for sale, does not transfer ownership or title interest, does not allocate or reserve specific inventory, and does not guarantee that a Device will be produced or made available for purchase.... Estimated ship dates, launch timelines, or anticipated production schedule are non-binding estimates only. Trump Mobile does not guarantee that: the Device will be commercially released... Trump Mobile will not be responsible for delay, modification, or failure to release a Device due to causes beyond its reasonable control, including but not limited to regulatory review, carrier certification delays, component shortages, labor disruptions, governmental orders, acts of God, transportation interruptions, or third-party supplier failures... If Trump Mobile cancels or discontinues the Device offering prior to sale, Trump Mobile will issue a full refund of the deposit amount paid... If Trump Mobile cancels, delays, or does not release the Device, your sole and exclusive remedy is a full refund of the deposit amount actually paid, and you waive any claim for equitable, injunctive, or specific performance relief relating to preorder priority or Device allocation.

There was an unconfirmed report on social media that the updated Terms were also emailed to customers (cited by the International Business Times). And the new language also hedges that for the gold T1 phone, "Images, prototypes, beta demonstrations, and marketing renderings are illustrative only and may not reflect final production units...."

But then eight days ago The Verge reported the phone "has just passed another milestone on its slow road to release," described as "a requirement for any phone launching in the US..." "The phone has received the little-known PTCRB certification, a first step toward being certified to work on major networks and be issued with IMEI numbers." [A]t least, I think it's been certified. What's actually been certified by the PTCRB is the SGG-06, a smartphone from Smart Gadgets Global, LLC, with support for 5G, 4G, 3G, and 2G networks.

Read more of this story at Slashdot.

Categories: Linux fréttir

Plant Seeds Do Something Incredible When the Sound of Rain Strikes

Slashdot - Sat, 2026-05-09 19:34
"Plant seeds can sense the vibrations generated by falling raindrops," reports ScienceAlert, "and respond by waking from their state of dormancy to welcome the water, new research shows.... to germinate in 'anticipation' of the coming deluge." The finding, discovered by MIT mechanical engineers Nicholas Makris and Cadine Navarro, offers the first direct evidence that seeds and seedlings can sense and respond to sounds in nature... "The energy of the rain sound is enough to accelerate a seed's growth," [explains Makris]. Plants don't have the same aural equipment we do to actually hear sounds, of course. But the study suggests that seeds respond to the same vibrations that can produce a sound experience in our human ears.

Across a series of experiments, the researchers submerged nearly 8,000 rice seeds in shallow tubs of water, at a depth of around 3 centimeters (1 inch), and exposed some of them to falling water drops over periods of six days... A hydrophone recorded the acoustic vibrations produced by the drops, confirming that the experiment mimicked the vibrations produced by actual raindrops falling in nature — such as the driving downpours that can sometimes pelt Massachusetts' puddles, ponds, and wetlands...

In their study, the researchers observed that seeds exposed to the falling drops germinated up to around 37% faster, compared with seeds that did not receive the simulated rainstorm treatment but were housed in otherwise identical conditions. More information in Scientific American and Scientific Reports.

Read more of this story at Slashdot.

Categories: Linux fréttir

Cisco Releases Open-Source 'DNA Test for AI Models'

Slashdot - Sat, 2026-05-09 18:34
Cisco has released an open-source tool "to trace the origins of AI models," reports SC World, "and compare model similarities for greater visibility into the AI supply chain." [Cisco's Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a "fingerprint" for AI models that can then be compared to other model fingerprints to determine potential shared origins. "Think of Model Provenance Kit as a DNA test for AI models," Cisco researchers wrote. "[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification."

The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model's origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations, and make it more difficult to resolve any incidents that arise from these risks.

Thanks to Slashdot reader spatwei for sharing the news.
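The summary doesn't describe the kit's actual fingerprinting algorithm, so the sketch below only illustrates the general idea of comparing learned parameters: hash each weight tensor so that unchanged layers produce identical digests, and measure similarity on the layers that differ. All names here (fingerprint, similarity, the toy models) are hypothetical stand-ins, not the Model Provenance Kit's API.

```python
import hashlib
import math

def fingerprint(weights):
    """Hash each tensor's (rounded) values; identical tensors -> identical digests.

    `weights` maps layer names to flat lists of floats. Rounding to four
    decimals makes the digest robust to tiny serialization noise -- a
    simplifying assumption, not Cisco's actual scheme.
    """
    return {
        name: hashlib.sha256(
            ",".join(f"{v:.4f}" for v in vals).encode()
        ).hexdigest()
        for name, vals in weights.items()
    }

def similarity(a, b):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "models": fine_tuned copies layer1 from base but modifies layer2.
base = {"layer1": [0.12, -0.34, 0.56], "layer2": [1.0, 2.0]}
fine_tuned = {"layer1": [0.12, -0.34, 0.56], "layer2": [1.1, 1.9]}

fp_base, fp_ft = fingerprint(base), fingerprint(fine_tuned)
shared = [n for n in fp_base if fp_base[n] == fp_ft[n]]
print("identical layers:", shared)  # -> identical layers: ['layer1']
```

In practice one would fingerprint real checkpoints pulled from a registry and flag any model whose layer digests overlap heavily with a known base model, which is the "shared origin" signal the article describes.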

Read more of this story at Slashdot.

Categories: Linux fréttir

Google tweaks Chrome AI privacy wording, insists processing stays on-device

TheRegister - Sat, 2026-05-09 17:57
Google has changed Chrome's disclosure language about how its on-device AI works, but that doesn't mean the company intends to capture on-device AI interactions. The Chrome menu modification, which isn't universally rolled out yet even in Chrome 148, was noted this week on Reddit.

The "On-device AI" message in Chrome's System settings previously read, "To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work." But the message changed recently – it lost the phrase "without sending your data to Google servers."

That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device. "Why was the sentence 'without sending your data to Google servers' removed from the on-device AI description in Chrome's Settings UI?" Hanff asked. "Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?"

Asked about this, a Google spokesperson said, "This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device."

It appears this situation deserves a more genteel rendering of Hanlon's Razor – "Never attribute to malice that which is adequately explained by stupidity." In this case, it's "Never attribute to malice that which is adequately explained by bad timing."

Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API's arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google's 4GB Nano model onto users' devices. The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat. In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years, to when local AI arrived in Chrome 126 as a preview program. While Google hasn't yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model.

"We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model," a Google spokesperson explained, pointing to relevant help documentation. "It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update."

The edit to the "On-device AI" message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device. But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling would fall under the privacy policy of the website interacting with the user's Nano instance.

Google decided to change its "On-device AI" message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device, in support of some service it provides. In that scenario, the Google site would have access to the prompts it sends and responses it gets from the user's on-device model. That interaction would happen "without sending your data to Google servers," at least in the context of a user querying a model running in Google Cloud. But since the user's on-device Chrome-resident Nano model would send data to the Google site in response to that site's API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit.

Google's decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people's computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users' local storage and occasional bursts of their on-device compute. ®
Categories: Linux fréttir

Social Media Sites Got Information from Ad Trackers on US State Health Insurance Sites

Slashdot - Sat, 2026-05-09 17:34
All 20 of America's state-run healthcare marketplace sites "include advertising trackers that share information with Big Tech companies," reports Gizmodo, citing a report from Bloomberg: Per the report, seven million Americans bought their health insurance through state exchanges in 2026, and many of them may have had personal information shared with companies, including Meta, TikTok, Snap, Google, Nextdoor, and LinkedIn, among others. Some of the data collected and shared with those companies included ZIP codes, a person's sex and citizenship status, and race. In addition to potentially sensitive biographical details about a person, the trackers also may reveal additional details about their life based on the sites they visit. For instance, Bloomberg found trackers on Medicaid-related web pages in Rhode Island, which could reveal information about a person's financial status and need for assistance. In Maryland, a Spanish-language page titled "Good News for Noncitizen Pregnant Marylanders" and a page designed to help DACA recipients navigate their healthcare options were found to be transmitting data to Big Tech firms... Per Bloomberg, several states have already removed some trackers from their exchange websites following the report. Thanks to Slashdot reader JoeyRox for sharing the news.

Read more of this story at Slashdot.

Categories: Linux fréttir

10 People Called Police to Report Bigfoot Sighting in Ohio

Slashdot - Sat, 2026-05-09 16:34
CNN reports on a "sudden surge of claimed sightings" of "unidentified figures averaging 8 feet tall in wooded areas" along Ohio's Mahoning River. "And it stopped just as quickly as it started," says Jeremiah Byron, host of the Bigfoot Society Podcast, which collected and mapped the reports....

Byron doesn't take every report at face value, making sure he talks to people directly before publicizing their claims. Once word got out about the reports in Ohio, so did the obvious fakes. "I started to get a lot of AI-generated reports in my email. It got up to the point where I was probably getting about 1,000 emails a day," he says. But when Byron spoke by phone with people who made the initial reports, they convinced him they weren't making anything up. "It was obvious they weren't just wanting to get their name out there," says Byron. "They were just freaked out by what they experienced, and they didn't want anything else to do with it." [...]

Local law enforcement in Ohio also seem to be enjoying the publicity. Portage County Sheriff Bruce D. Zuchowski made a series of gag posts purporting to show the arrest of Bigfoot and his detention by Immigration and Customs Enforcement, only for the creature to escape from custody at the Canadian border... Despite the levity, the sheriff's office really did get some calls from concerned residents, Zuchowski says. "Ten individual people were like, 'Yeah I was walking my dog at 4 a.m. and I saw this hairy figure and I smelled this musty odor and there was this big thing and all of a sudden it ran,'" the sheriff told CNN affiliate WOIO in March.

Read more of this story at Slashdot.

Categories: Linux fréttir

Newspaper Chain's Reporters Withhold Their Bylines to Protest 'AI-Assisted' Articles

Slashdot - Sat, 2026-05-09 15:34
A chain of 30 U.S. newspapers including the Sacramento Bee, the Miami Herald and the Idaho Statesman "has started to use a new AI tool that can summarize traditional articles and spit out different versions for different audiences," reports the New York Times. And the chain's reporters "are not happy about it."

Journalists in many of the company's newsrooms are now withholding their bylines from articles created by the new tool, meaning that those articles will run with a generic credit rather than a reporter's name, as is customary. They are also labeled AI-assisted. "We don't want to put our bylines on stories we did not actually write even if they're based on our work," said Ariane Lange, an investigative reporter at the Sacramento Bee and the vice chair of the Sacramento Bee News Guild. "That in itself feels like a lie."

The reporters' byline strike is one of the sharpest conflicts yet between journalists and their companies over the use of AI. Related debates are playing out in newsrooms across the country, as publishers experiment with new AI tools to streamline work that used to take hours, and some even use it to write full articles... [E]xecutives have promoted the tool internally as a way to increase the number of articles published and ultimately gain new subscribers... [Eric Nelson, the vice president of local news] said using reporters' bylines on the AI-generated articles was a way to show "authority" on Google so the search engine would rank the articles higher in the results. He also said the company was experimenting with feeding in reporters' notes to create articles. "Journalists who embrace and experiment with this tool are going to win," Nelson said in the meeting. "Journalists who are defiant will fall behind...."
McClatchy's public AI policy states that the company uses AI tools to summarize articles to "help readers quickly understand the main points of a single story or catch up on multiple stories about a larger topic," and that editors review the output before publication.

Read more of this story at Slashdot.

Categories: Linux fréttir

Why Some US Schools Are Cutting Back On the Technology They Spent Billions On

Slashdot - Sat, 2026-05-09 14:34
America's school districts "spent billions on technology during the pandemic," reports the Washington Post. "But now some states are limiting in-school screen time because of concerns about its impact on children."

Nationwide, [U.S.] schools invested at least $15 billion and possibly as much as $35 billion from federal pandemic relief funds on laptops, learning software and other technology between 2020 and 2024, according to an estimate by the Edunomics Lab, an education think tank. By last school year, 88% of public schools reported in a federal survey that they had given every child a laptop, tablet or similar device.

Now, some states and school districts are walking back their technology use following pressure from parents who claim too much in-school screen time has zapped children's attention spans and left them worse off academically. At least a dozen states introduced or adopted policies this year that attempt to regulate screen time in schools — from prescribing limits to allowing families to opt out of virtual instruction... In Missouri, a bill that would require every school district in the state to come up with a screen time policy is making its way through the state legislature. "Ed tech is just big tech in a sweater vest," said Missouri state Rep. Tricia Byrnes (R), who introduced the legislation and blames what she described as the overuse of technology for middling test scores...

Complicating the issue is research that shows students do not see any academic gains when provided with laptops. A meta-analysis of studies on reading comprehension suggests paper-based texts are better than digital-based reading... A body of research has established that excessive or unstructured screen time can have detrimental effects on children, including harming language development, weakening social skills and triggering anxiety and depression.
But the effects of school-issued devices and in-school usage on children's development are less understood, said Tiffany Munzer, a developmental behavioral pediatrician and digital media researcher at the University of Michigan. Some studies report that high-quality digital tools can support students' learning goals, Munzer said. But "a lot of the apps that are marketed as educational ... are not actually educational and contain a lot of commercialized content."

Read more of this story at Slashdot.

Categories: Linux fréttir

macOS 27 threatens to bury Time Capsule, FOSS brings a shovel

TheRegister - Sat, 2026-05-09 12:25
The next major release of macOS looks likely to remove Apple Filing Protocol (AFP) support, stopping Time Capsules from working… but FOSS, uh, finds a way. The current version of macOS "Tahoe" 26.4 already has network Time Machine issues, especially for folks using Apple Time Capsules. It looks like macOS 27 may completely remove the network protocol they need. However, the Time Capsules run NetBSD under the hood, and that means that the FOSS world has been able to come up with a workaround. It's called TimeCapsuleSMB, and it aims to keep older Time Capsules usable with modern macOS.

It's eight months since Apple released macOS 26, and the company's annual release schedule means that macOS 27 is looming. Although Cupertino hasn't told the world much about it yet, it is warning sysadmins to "prepare your network environment for stricter security requirements." Reading the bulletin, we found it rather clixby: while it firmly warns that security checks will become stricter, it doesn't spell out which products will change or how. Happily, there are elder Mac gurus out there who interpret Apple's sometimes Delphic utterances, and Howard Oakley is one of the greatest. In a post about networking changes coming in macOS 27, he translates that it will require TLS 1.2 or above. (The Register explained TLS back in 2002, and version 1.2 appeared about six years later.) However, he also warns that it could mean the end of AFP, which is basically AppleTalk-over-TCP/IP, version 3.4. AppleTalk was the Mac network protocol for file sharing from System 6 onward. In 2013, OS X 10.9 "Mavericks" made Microsoft's SMB the default file-sharing protocol in place of AFP, and it looks like AFP now faces the ax: it was officially deprecated in macOS 15.5. To be fair, macOS 26 Macs started displaying a warning to Time Capsule users nearly a year ago.

Apple introduced the first model of Time Capsule in 2008, and the fifth-generation version in 2013. The company discontinued the whole AirPort product line in 2018. All generations only support AFP and SMB version 1. That’s the original version that appeared with LAN Manager in 1987, and we reported on Samba dropping SMB1 back in 2022.

The good news is that even if Apple kills its original file-sharing protocol next year, the FOSS community is on the case and won't let working kit die. The Time Capsule hardware is essentially a box containing a Wi-Fi access point and a hard disk, plus an Arm chip with just enough software to share that HDD as network-attached storage. Apple didn't write this software from scratch: it picked up and customized NetBSD for the job. The first four generations of Time Capsule (flat square boxes) run NetBSD 4, and the fifth-gen devices – the tall tower-shaped models from 2013 onward – run NetBSD 6.

That gave Microsoft's James Chang an opening. Since the devices run NetBSD, it's possible to compile a newer version of Samba and copy it somewhere that the tiny embedded Arm computer can find it. Teaching such old kit a new trick is never that easy, though, and he faced a number of challenges, which he details in the design section of the project README. Among them are machines that have only about 900 KB of available disk space – less than 1 MB – and a tiny 16 MB RAMdisk. He settled on Samba 4.8, which dates back to 2018, the same year Apple discontinued the product line, but which includes the necessary Time Machine support via a module named vfs_fruit.

The TimeCapsuleSMB docs are worth a read. We found his descriptions of how he worked around the hardware's very significant limitations impressive. Notably, on the early models, you'll need to manually reload the software every time you reboot the Time Capsule. The final model can do this automatically. Don't fret at the thought of backing up to such an elderly spinning hard disk: iFixit has descriptions of how to replace the drive in both the early models and the later ones too. ®
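For context on why Samba 4.8 specifically: that is the release which added the `fruit:time machine` option to vfs_fruit, the module that implements the Apple SMB2 extensions Time Machine needs. As a hedged illustration (the share name and path are placeholders, not necessarily what TimeCapsuleSMB ships), a minimal smb.conf stanza advertising a Time Machine target over SMB looks something like:

```ini
; Illustrative Samba share for Time Machine backups over SMB.
; The share name and path are assumptions for this sketch, not
; TimeCapsuleSMB's actual configuration.
[TimeMachine]
   path = /mnt/backup
   read only = no
   vfs objects = fruit streams_xattr
   fruit:time machine = yes
   fruit:metadata = stream
```

With a share like this, macOS discovers the volume as a Time Machine destination over SMB rather than AFP, which is what keeps the old hardware useful once AFP goes away.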
Categories: Linux fréttir

Humanoid Robot Becomes Buddhist Monk In South Korea

Slashdot - Sat, 2026-05-09 11:00
A four-foot humanoid robot named Gabi has become a monk at a Buddhist temple in Seoul, participating in a modified initiation ceremony where it pledged to respect life, obey humans, and act peacefully toward other robots and objects. "Robots are destined to collaborate with humans in every field in the future," Hong Min-suk, a manager at the Jogye Order, the largest sect of Buddhism in South Korea, tells the New York Times. "It will only be natural for them to be part of our festival."

Smithsonian Magazine reports: For the temple, this marks the first time a robot has participated in the sugye initiation ceremony, when followers pledge their devotion to the Buddha and his teachings. Gabi -- a Buddhist name that refers to mercy, Yonhap News Agency reports -- was made by Unitree Robotics, a Chinese civilian robotics company. The model, G1, retails starting at $13,500.

During the ceremony, Gabi agreed to five vows usually recited by human monks and slightly altered for the humanoid. The robot pledged to respect life, act with peace toward other robots and objects, listen to humans, refrain from acting or speaking in a deceptive manner, and save energy. Gabi participated in a modified yeonbi purification ritual. While a human monk normally receives a small incense burn on the arm, Gabi instead received a lotus lantern festival sticker and a prayer bead necklace.

The landmark event aligns with the promise made during a New Year's address by the Venerable Jinwoo, president of the Jogye Order of Korean Buddhism, to incorporate artificial intelligence into the Buddhist tradition. "We aim to fearlessly lead the A.I. era and redirect its achievements toward the path of attaining peace of mind and enlightenment," he said, per a statement.

Read more of this story at Slashdot.

Categories: Linux fréttir

London’s BT Tower to get rooftop swimming pool

TheRegister - Sat, 2026-05-09 10:03
Visitors to London’s iconic Telecom Tower might soon be able to go for a rooftop swim, according to plans revealed by the developer turning the building into a hotel. The 177 meter (581 ft) high structure in Fitzrovia in London’s West End was sold off by BT Group in 2024 to US-based hotel owner-operator MCR Hotels for £275 million ($346 million). At the time, the firm said it wanted to preserve the Grade II listed building, while converting it into a hostelry. Now, MCR has announced a small number of public consultation events it is holding on May 11, 12, and 16 where those interested can view the emerging proposals for the site, meet the project team, and share any feedback on the plans. Those proposals include public access to the top of the tower and its podium buildings for the first time in almost half a century. The 34th floor was famously home to a revolving restaurant that gave diners a panoramic view of Britain’s capital as it slowly turned once every 22 minutes, but this was closed in 1980. Also part of the proposals are a new publicly accessible square plus retail shops and restaurants at ground level, and a rooftop swimming pool. London is home to a number of high-rise swimming venues already. There is the vertigo-inducing Sky Pool which spans two apartment buildings ten stories up at the Embassy Gardens development in the Nine Elms region of Wandsworth. You will find an infinity pool at the Shangri-La hotel on the 52nd floor of the Shard building near London Bridge, and there is also a pool on the roof of the Berkeley Hotel, overlooking Knightsbridge. The BT Tower was originally known as the Post Office Tower when it was first built in 1964, and its main purpose was to support microwave antennas used to beam telecom signals between London and the rest of the country. The tower will not be turned into a vertical hotel immediately. 
BT said payment for the site is spread over six years to 2030, during which time the company will gradually remove all of its telecoms equipment from the building. As we reported previously, the BT Tower also famously fell victim to a giant kitten in an episode of the British 1970s TV comedy series The Goodies. ®
Categories: Linux fréttir

UK wants fresh fingerprints on £300M biometrics platform

TheRegister - Sat, 2026-05-09 08:30
The UK Home Office wants to talk to suppliers about its plans for two potential procurements for the Strategic Central and Bureau Platform (SCBP), its core biometrics system, worth up to £300 million. The department said the procurements could cover support, development, and ongoing modernization of SCBP after it shifted much of the platform to "more modern and widely adopted technology stacks." It said this could allow a broader range of suppliers to undertake support and development work, and split up the work ("potential disaggregation"), according to a preliminary market engagement notice. The notice quotes a total estimated value for the contracts of £296 million including VAT over up to 11 years from October 2027, although it adds that this is based on current annual charges – suggesting these are around £27 million – and should be seen as indicative. The Home Office is holding an event with TechUK on May 15 to start the discussion, with participants required to sign a non-disclosure agreement first. SCBP is part of the long-running Home Office Biometrics (HOB) program to bring together the government's collections of fingerprints, DNA profiles, and facial images. SCBP provides the core components of the Immigration and Asylum Biometrics System (IABS) used for passports, immigration and borders, and the corresponding Ident1 service used by law enforcement. The department's most recent assessment of the HOB program in December 2024 referred to a cost increase of £47.8 million, of which £34 million covers Ident1 modernization "to deal with urgent obsolescence issues and security vulnerabilities" and £4.4 million is for an upgrade to support Livescan, through which police officers collect fingerprints and facial images following arrests. The assessment said the overall cost of the HOB program from 2014-15 to 2034-35 then stood at £1.55 billion. 
According to Home Office permanent secretary Matthew Rycroft, benefits include searching crime marks (such as fingerprints left at crime scenes) against immigration databases, the police's mobile fingerprint identification service, and the ability to collaborate with other countries. ®
Categories: Linux fréttir

Fiber Optic Cables Can Eavesdrop On Nearby Conversations

Slashdot - Sat, 2026-05-09 07:00
sciencehabit shares a report from Science Magazine: Cold War spies planted bugs in walls, lamps, and telephones. Now, scientists warn, the cables themselves could listen in. A fiber optic technique used to detect earthquakes can also pick up the faint vibrations of nearby speech, researchers reported this week at the general assembly of the European Geosciences Union. Freely available artificial intelligence (AI) software turned the fiber optic data into intelligible, real-time transcripts. "Not many people realize that [fiber optic cables] can detect acoustic waves," says Jack Lee Smith, a geophysicist at the University of Edinburgh who presented the result. "We show that in almost every case where you use these fibers, this could be a privacy concern." Fiber optics can pick up on sound through a technique called distributed acoustic sensing (DAS). Using a machine called an interrogator, researchers fire laser pulses down a cable and record the pattern of reflections coming back from tiny glass defects along the length of the fiber. When an earthquake's seismic wave crosses a section of the fiber, it stretches and squeezes the defects, leading to shifts in the reflected light that researchers can use to build a picture of an earthquake. DAS essentially turns a fiber cable into a long chain of seismometers that can detect not only earthquakes, but also the rumblings of volcanoes, cars, and college marching bands. And although scientists set up dedicated fiber lines specifically for research, DAS can also be performed on "dark fiber" -- unused strands in the web of fiber optics that runs through cities and across oceans, carrying the world's internet traffic. DAS can also be used to eavesdrop, the work of Smith and his colleagues shows. They conducted a field test using an existing DAS setup used to study coastal erosion. They set a speaker next to the cable and played pure tones, music, and speech. 
Human speech contains frequencies ranging from a few hundred to several thousand hertz. The low end of the range could be pulled out of the data "even without any preprocessing," Smith says. "You can easily see acoustic waves." Getting higher frequency speech took a bit of postprocessing, but it was possible. Dumping the data directly into Whisper, a free AI transcription tool, provided accurate real-time transcription. However, this technique worked only for coiled cables, exposed at the surface, at distances of up to 5 meters from the speaker. Burying the cable under just 20 centimeters of dirt was enough to muddy the speech. And straight cables -- even exposed ones right next to the speaker -- did not record speech well.
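The frequency-domain idea behind pulling speech out of seismic-style data is simple enough to sketch. The toy below is not the researchers' pipeline: it uses pure Python, an invented 5 Hz "ground rumble," and an invented 120 Hz tone standing in for a low speech-band component. It shows that even when the rumble is three times stronger, a naive DFT makes the speech-band peak stand out clearly in its own band:

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT: magnitude of each frequency bin up to the Nyquist limit."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

fs = 500                        # samples per second; 1 s of data gives 1 Hz bins
rumble_hz, speech_hz = 5, 120   # invented: ground noise vs. a speech-band tone
strain = [math.sin(2 * math.pi * rumble_hz * n / fs)
          + 0.3 * math.sin(2 * math.pi * speech_hz * n / fs)
          for n in range(fs)]

mags = dft_magnitudes(strain)
# Search only the (assumed) speech band, 80-240 Hz: the rumble at 5 Hz is
# excluded automatically, and the weaker tone dominates its own band.
peak = max(range(80, 240), key=lambda k: mags[k])
print(peak)  # 120
```

This also hints at why burying the cable kills the effect: soil strongly attenuates exactly those higher-frequency components, leaving only the low-frequency bins with usable energy.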

Read more of this story at Slashdot.

Categories: Linux fréttir

NASA Keeps Track As Mexico City Sinks Into the Ground

Slashdot - Sat, 2026-05-09 03:30
An anonymous reader quotes a report from the Guardian: Walking into Mexico City's sprawling central Zocalo is a dizzying experience. At one end of the plaza, the capital's cathedral, with its soaring spires, slumps in one direction. An attached church, known as the Metropolitan Sanctuary, tilts in the other. The nearby National Palace also seems off-kilter. The teetering of many of the capital's historic buildings is the most visible sign of a phenomenon that has been ongoing for more than a century: Mexico City is sinking at an alarming rate. Now, the metropolis's descent is being tracked in real time thanks to one of the most powerful radar systems ever launched into space. Known as Nisar, the satellite can detect minute changes in Earth's surface, even through thick vegetation or cloud cover. "Nisar takes radar imaging observations of Earth to the next level," said Marin Govorcin, a scientist at Nasa's Jet Propulsion Laboratory. "Nisar will see any change big or small that happens on Earth from week to week. No other imaging mission can claim this." Though this is not the first time Mexico City's sinking has been observed from space, the Nisar mission has provided a greater sense of how far the sinking spreads and how it changes across different types of land than any other space-based sensor. It has also been able to penetrate areas on the outskirts of the city that were previously challenging to study because of the complex terrain. The implications of the imagery extend far beyond the Mexican capital. "This study of Mexico City speaks to the realm of possibilities that will open up thanks to the Nisar system," said Dario Solano-Rojas, an engineer at the National Autonomous University of Mexico (Unam). "And not just for sinking cities but also for studying volcanoes, for studying the deformation associated with earthquakes, for studying landslides." 
According to Nasa, the technology is also capable of monitoring the climate crisis, glacier sliding, agricultural productivity, soil moisture, forestry, coastal flooding and more. The Nisar system found that some parts of the city are dropping by more than 2cm a month. "First documented in 1925, the city's sinking is a result of centuries of exploitation of the groundwater," the report says. "Because Mexico City and its surrounds were built on an ancient lake bed, the soil beneath the city is extremely soft. When water is pumped out of the aquifer below, this clay-like earth compacts, resulting in a city that is quietly sinking." The crisis is also self-reinforcing: as the city sinks, aging pipes crack and leak, causing Mexico City to lose an estimated 40% of its water, even as drought and climate change make supplies more fragile.

Read more of this story at Slashdot.

Categories: Linux fréttir

Akamai surges on big LLM deal as Cloudflare dims

TheRegister - Fri, 2026-05-08 23:17
This week was the best of times for Akamai and the worst of times for Cloudflare. On the same evening that content delivery network mainstay Cloudflare announced it was cutting about a fifth of its staff in a realignment around AI, its competitor Akamai announced a seven-year, $1.8 billion deal with a leading LLM provider that Bloomberg identified as Anthropic. Akamai CEO Tom Leighton said this was the largest deal in the company’s history and that it came after another large, unidentified frontier-model developer signed a $200 million deal last quarter. “These leaders in AI have chosen Akamai because their AI workloads need the scale, performance and reliability that our cloud platform provides,” he said during the company’s first quarter earnings call on Thursday. Akamai, which has 4,300 locations in 700 cities across 130 countries, won the deal against stiff competition from hyperscalers and neoclouds. He said Akamai’s ability to manage and scale complex distributed systems, as well as its low latency, tipped the scales in its favor. Given the supply chain constraints in datacenter space, especially as they relate to memory costs and the infrastructure needed inside large datacenter buildouts, one analyst asked if Akamai planned any increase to its capital expenditures this year to pay for it. Akamai executive vice president and CFO Ed McGowan said that was not likely. “We’ve been able to get the supply chain ready. We anticipate receiving all the goods that we need to deliver these services over the next seven years within the next 12 months,” he said. “Now there’s always potential for slippage and delays, but we have mechanisms in our contracts to deal with it if, say, six months from now, prices were to go up. So we’ve taken that into consideration.” McGowan said it is a consumption-based contract over seven years, so as soon as Akamai ramps the necessary capacity, it will start taking revenue, which he expects to begin happening later this year. 
Winning this deal and ones like it has been Akamai’s goal in the AI era, Leighton said. “This has been the strategy all along. So we’re very pleased to be executing against it,” he said. “The goal has been to be deploying a distributed inference platform, distributed compute platform that would be desired by enterprises across the spectrum … The platform is to a point where we can do that, and I think you'll see more of this going forward.” On the same day, across the country, Cloudflare was spelling out the bad news to its employees that it planned to cut the workforce by 1,100, roughly 20 percent. Cloudflare co-founders Matthew Prince and Michelle Zatlyn said it was not about cutting costs, but about building a company that meets the AI moment. “We have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere,” they wrote in a blog post. Cloudflare’s revenues grew 34 percent year over year to reach $639.8 million in the first quarter. It posted a net loss of $22.9 million. It expects to pay up to $150 million in severance and benefit payments related to the layoffs. While Akamai’s stock price surged 26 percent on Friday, Cloudflare dropped 23 percent. With a market cap of over $69 billion, Cloudflare still has more than three times Akamai’s market cap. ®
Categories: Linux fréttir

Does Fidelity's Reorganization Signal the Beginning of the End for 'Small-Team Agile'?

Slashdot - Fri, 2026-05-08 23:00
Longtime Slashdot reader cellocgw writes: Hiding inside another layoff report, Fidelity is reorganizing: "The changes are aimed at moving the teams away from an 'agile' makeup -- comprising smaller, siloed squads -- and toward larger teams built to move faster on projects." OMG, as they say: "Sudden outbreak of common sense." According to the Boston Globe, Fidelity is cutting about 1,000 jobs even as it plans to hire roughly 5,300 new workers, many of them early-career engineers. Half of the 3,300 new workers hired this year "will be in tech or product-related roles," the report says, noting that "about 2,000 of those jobs are currently open, and 400 of them are in tech/product-delivery." "The company also plans to add almost 2,000 new early-career workers, with the goal of making the tech and product-delivery teams more hands-on. In all, that means roughly 5,300 new jobs in the pipeline for Fidelity." The company says AI isn't driving the shift; as cellocgw noted, it's about moving toward larger teams that Fidelity says can move faster on priority projects. The financial services firm also reported a strong 2025 under CEO Abigail Johnson, with managed assets rising 19% from 2024 to $7.1 trillion and revenue climbing 15% to $37.7 billion. "Throughout the company's history, our investments in technology have fueled our growth and customer service capabilities," Johnson wrote in a letter (PDF) included in the company's annual report. "We will continue to prioritize technology initiatives that help us advance digital capabilities, simplify our technology ecosystem, and protect the firm and our customers."

Read more of this story at Slashdot.

Categories: Linux fréttir

Micron Ships Gigantic 245TB SSD

Slashdot - Fri, 2026-05-08 22:00
BrianFagioli writes: Micron says it is now shipping the world's highest-capacity commercially available SSD, and the numbers are honestly hard to wrap your head around. The new Micron 6600 ION packs 245TB into a single drive and is aimed squarely at AI infrastructure, hyperscalers, and cloud providers dealing with exploding data growth. According to the company, the SSD can reduce rack counts by 82 percent compared to HDD deployments offering similar raw capacity, while also cutting power usage and cooling requirements. Micron says the drive tops out at roughly 30W, which it claims is about half the power draw of comparable hard drive setups. The announcement also feels like another warning sign for spinning disks in the enterprise. Hard drives still dominate bulk storage because of lower cost per terabyte, but SSD capacities keep climbing into territory that used to belong exclusively to HDDs. Micron is also touting major performance gains, claiming up to 84 times better energy efficiency for AI workloads and dramatically lower latency versus HDD-based systems. While nobody is dropping one of these into a home NAS anytime soon, the idea of a quarter petabyte on a single SSD no longer sounds like science fiction.
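The density argument behind claims like the 82 percent rack reduction is easy to sanity-check with a back-of-envelope drive count. In the sketch below, only the 245 TB SSD figure comes from the announcement; the 24 TB HDD size and the 100 PB deployment target are our own illustrative assumptions, and rack packing, redundancy, and cost per terabyte are deliberately ignored:

```python
# Back-of-envelope: raw drive counts for a hypothetical 100 PB deployment.
target_tb = 100_000           # 100 PB raw capacity (assumed target)
ssd_tb, hdd_tb = 245, 24      # 245 TB from Micron; 24 TB HDD is an assumption

def drives_needed(total_tb, drive_tb):
    """Ceiling division: whole drives required to reach the target."""
    return -(-total_tb // drive_tb)

ssds = drives_needed(target_tb, ssd_tb)
hdds = drives_needed(target_tb, hdd_tb)
print(ssds, hdds)   # roughly an order of magnitude fewer drives before
                    # even considering per-rack packing or power
```

Fewer drives means fewer slots, enclosures, and watts, which is where the vendor's rack-count and power claims come from, though the actual percentages depend on chassis density figures this sketch does not model.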

Read more of this story at Slashdot.

Categories: Linux fréttir
