An anonymous reader quotes a report from the BBC: Experts at the University of Edinburgh carried out a post-mortem brain examination on 25 cats which had symptoms of dementia in life, including confusion, sleep disruption and an increase in vocalization. They found a build-up of amyloid-beta, a toxic protein and one of the defining features of Alzheimer's disease. The discovery has been hailed as a "perfect natural model for Alzheimer's" by scientists who believe it will help them explore new treatments for humans.
Dr Robert McGeachan, study lead from the University of Edinburgh's Royal (Dick) School of Veterinary Studies, said: "Dementia is a devastating disease -- whether it affects humans, cats, or dogs. Our findings highlight the striking similarities between feline dementia and Alzheimer's disease in people. This opens the door to exploring whether promising new treatments for human Alzheimer's disease could also help our ageing pets." [...]
Previously, researchers have studied genetically-modified rodents, although the species does not naturally suffer from dementia. "Because cats naturally develop these brain changes, they may also offer a more accurate model of the disease than traditional laboratory animals, ultimately benefiting both species and their caregivers," Dr McGeachan said. [...] Prof Danielle Gunn-Moore, an expert in feline medicine at the vet school, said the discovery could also help to understand and manage feline dementia. The findings have been published in the European Journal of Neuroscience.
Read more of this story at Slashdot.
For now at least, even though government buying can improve, open source is not all it's cracked up to be
Debate Not for the first time, Microsoft is in the spotlight over the amount of UK government money it voraciously consumes – apparently £1.9 billion a year in software licensing, and roughly £9 billion over five years. Not surprisingly, there are plenty of voices questioning whether this is a good use of public money. After all, aren't there plenty of open source alternatives?…
Foundation warns federated servers face biggest risk, but single-instance users can take their time
The maintainers of the federated secure chat protocol Matrix are warning users of a pair of "high severity protocol vulnerabilities," addressed in the latest version, saying patching them requires a breaking change in servers and clients.…
You guessed it: looks like it's a so-called AI
People are noticing Firefox gobbling extra CPU and electricity, apparently caused by an "inference engine" built into recent versions of Firefox. Don't say El Reg didn't try to warn you.…
An encounter with the healthcare system reveals sickening decisions about data
Column We already live in a world where pretty much every public act - online or in the real world - leaves a mark in a database somewhere. But how far back does that record extend? I recently learned that record goes back further than I'd seriously imagined.…
United Launch Alliance's Vulcan Centaur rocket successfully completed its first-ever national security mission, launching the U.S. military's first experimental navigation satellite in 48 years. Space.com reports: The mission saw the company's powerful new Vulcan Centaur rocket take off from Space Launch Complex 41 (SLC-41) at Cape Canaveral Space Force Station in Florida. Vulcan launched with four side-mounted solid rocket boosters in order to generate enough thrust to send its payload directly into geosynchronous orbit on one of ULA's longest flights ever, a seven-hour journey that will span over 22,000 miles (35,000 kilometers), according to ULA.
The payload launching on Tuesday's mission was the U.S. military's first experimental navigation satellite to be launched in 48 years. It is what's known as a position, navigation and timing (PNT) satellite, a type of spacecraft that provides data similar to that of the well-known GPS system. This satellite will be testing many experimental new technologies that are designed to make it resilient to jamming and spoofing, according to Andrew Builta with L3Harris Technologies, the prime contractor for the PNT payload integrated onto a satellite bus built by Northrop Grumman.
The satellite, identified publicly only as Navigation Technology Satellite-3 (NTS-3), features a phased array antenna that allows it to "focus powerful beams to ground forces and combat jamming environments," Builta said in a media roundtable on Monday (Aug. 11). GPS jamming has become an increasingly worrisome problem for both the U.S. military and commercial satellite operators, which is why this spacecraft will be conducting experiments to test how effective these new technologies are at circumventing jamming attacks. In addition, the satellite features a software architecture that allows it to be reprogrammed while in orbit. "This is a truly game-changing capability," Builta said.
Read more of this story at Slashdot.
Agency asks for ideas from US industry as orbit decays
NASA is seeking ways to raise the orbit of the Neil Gehrels Swift Observatory, even though the spacecraft is marked for termination after FY2026 under the agency's budget proposal.…
Minnesota’s capital is the latest to feature on Interlock’s leak blog after late-July cyberattack
The Interlock ransomware gang has flaunted a 43GB haul of files allegedly stolen from the city of Saint Paul, following a late-July cyberattack that forced the Minnesota capital to declare a state of emergency.…
First came the fireball, then a hole in the roof and a dent in the floor
In late June media speculated that a meteor entering Earth’s atmosphere caused widespread sightings of a celestial fireball during daylight hours across the southeast USA. Scientists have now confirmed space rocks caused the phenomenon, citing as evidence a meteorite they found in a resident’s living room.…
Federal Court finds Big Tech players abused their market power
Australia’s Federal Court has given Epic Games another win in its global fight against the way Apple and Google run their app stores.…
An anonymous reader quotes a report from ZDNet: You can't say Linux creator Linus Torvalds didn't give the kernel developers fair warning. He'd told them: "The upcoming merge window for 6.17 is going to be slightly chaotic for me. I have multiple family events this August (a wedding and a big birthday), and with said family being spread not only across the US, but in Finland too, I'm spending about half the month traveling." Therefore, Torvalds continued, "That does not mean I'll be more lenient to late pull requests (probably quite the reverse, since it's just going to add to the potential chaos)." So, when Meta software engineer Palmer Dabbelt pushed through a set of RISC-V patches and admitted "this is very late," he knew he was playing with fire. He just didn't know how badly he'd be burned.
Torvalds fired back on the Linux Kernel Mailing List (LKML): "This is garbage and it came in too late. I asked for early pull requests because I'm traveling, and if you can't follow that rule, at least make the pull requests good." It went downhill from there. Torvalds continued: "This adds various garbage that isn't RISC-V specific to generic header files. And by 'garbage,' I really mean it. This is stuff that nobody should ever send me, never mind late in a merge window." Specifically, Torvalds hated the "crazy and pointless" way in which one of the patch's helper functions combined two unsigned 16-bit integers into a 32-bit integer. How bad was it? "That thing makes the world actively a worse place to live. It's useless garbage that makes any user incomprehensible, and actively *WORSE* than not using that stupid 'helper.'"
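For readers unfamiliar with the operation at the center of the complaint, the snippet below is a hypothetical Python illustration of packing two unsigned 16-bit integers into a single 32-bit integer. It is not the C code from the disputed pull request, and the function names are invented. Torvalds' point was that wrapping such a one-line operation in a generic "helper" makes callers harder to read, not easier.

```python
def pack_u16_pair(high: int, low: int) -> int:
    """Hypothetical illustration: pack two unsigned 16-bit values into
    one unsigned 32-bit value, with `high` in the upper 16 bits."""
    return ((high & 0xFFFF) << 16) | (low & 0xFFFF)


def unpack_u16_pair(value: int) -> tuple[int, int]:
    """Reverse operation: split a 32-bit value back into its two halves."""
    return (value >> 16) & 0xFFFF, value & 0xFFFF


assert pack_u16_pair(0x1234, 0xABCD) == 0x1234ABCD
assert unpack_u16_pair(0x1234ABCD) == (0x1234, 0xABCD)
```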
In addition to the quality issues, Torvalds was annoyed that the offending code was added to generic header files rather than the RISC-V tree. He emphasized that such generic changes could negatively impact the broader Linux community, writing: "You just made things WORSE, and you added that 'helper' to a generic non-RISC-V file where people are apparently supposed to use it to make other code worse too... So no. Things like this need to get bent. It does not go into generic header files, and it damn well does not happen late in the merge window. You're on notice: no more late pull requests, and no more garbage outside the RISC-V tree." [...] Dabbelt gets it. He replied, "OK, sorry. I've been dropping the ball lately, and it kind of piled up, taking a bunch of stuff late, but that just leads to me making mistakes. So I'll stop being late, and hopefully that helps with the quality issues."
Read more of this story at Slashdot.
Tells court 'What I did was wrong and I want to apologize for my conduct'
Terraform Labs founder Do Kwon has pled guilty to committing fraud when promoting the so-called "stablecoin" Terra USD and now faces time in jail.…
Cornell University researchers have developed an "invisible" light-based watermarking system that embeds unique codes into the physical light that illuminates the subject during recording, allowing any camera to capture authentication data without special hardware. By comparing these coded light patterns against recorded footage, analysts can spot deepfake manipulations, offering a more resilient verification method than traditional file-based watermarks. TechSpot reports: Programmable light sources such as computer monitors, studio lighting, or certain LED fixtures can be embedded with coded brightness patterns using software alone. Standard non-programmable lamps can be adapted by fitting them with a compact chip -- roughly the size of a postage stamp -- that subtly varies light intensity according to a secret code. The embedded code consists of tiny variations in lighting frequency and brightness that are imperceptible to the naked eye. [Cornell researcher Peter] Michael explained that these fluctuations are designed based on human visual perception research. Each light's unique code effectively produces a low-resolution, time-stamped record of the scene under slightly different lighting conditions. [Abe Davis, an assistant professor] refers to these as code videos.
"When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos," Davis said. "And if someone tries to generate fake video with AI, the resulting code videos just look like random variations." By comparing the coded patterns against the suspect footage, analysts can detect missing sequences, inserted objects, or altered scenes. For example, content removed from an interview would appear as visual gaps in the recovered code video, while fabricated elements would often show up as solid black areas. The researchers have demonstrated the use of up to three independent lighting codes within the same scene. This layering increases the complexity of the watermark and raises the difficulty for potential forgers, who would have to replicate multiple synchronized code videos that all match the visible footage. The concept is called noise-coded illumination and was presented on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
Read more of this story at Slashdot.
Terraform Labs founder Do Kwon pleaded guilty in U.S. federal court to conspiracy to defraud and wire fraud over the $40 billion collapse of TerraUSD and Luna in 2022. Reuters reports: Kwon, 33, who co-founded Singapore-based Terraform Labs and developed the TerraUSD and Luna currencies, entered the plea at a court hearing in New York before U.S. District Judge Paul Engelmayer. He had pleaded not guilty in January to a nine-count indictment charging him with securities fraud, wire fraud, commodities fraud and money laundering conspiracy.
Accused of misleading investors in 2021 about TerraUSD - a so-called stablecoin designed to maintain a value of $1 - Kwon pleaded guilty to the two counts under an agreement with the Manhattan U.S. Attorney's office, which brought the charges. He faces up to 25 years in prison when Engelmayer sentences him on December 11, though prosecutor Kimberly Ravener said the government had agreed to advocate for a prison term of no more than 12 years provided he accepts responsibility for his crimes. "I made false and misleading statements about why it regained its peg by failing to disclose a trading firm's role in restoring that peg," Kwon said in court. "What I did was wrong."
Read more of this story at Slashdot.
ole_timer shares a report from the New York Times: Investigators have uncovered evidence that Russia is at least partly responsible for a recent hack of the computer system that manages federal court documents, including highly sensitive records with information that could reveal sources and people charged with national security crimes, according to several people briefed on the breach. It is not clear what entity is responsible, whether an arm of Russian intelligence might be behind the intrusion or if other countries were also involved in what some of the people familiar with the matter described as a yearslong effort to infiltrate the system. Some of the searches included midlevel criminal cases in the New York City area and several other jurisdictions, with some cases involving people with Russian and Eastern European surnames.
Administrators with the court system recently informed Justice Department officials, clerks and chief judges in federal courts that "persistent and sophisticated cyber threat actors have recently compromised sealed records," according to an internal department memo reviewed by The New York Times. The administrators also advised those officials to quickly remove the most sensitive documents from the system. "This remains an URGENT MATTER that requires immediate action," officials wrote, referring to guidance that the Justice Department had issued in early 2021 after the system was first infiltrated. Documents related to criminal activity with an overseas tie, across at least eight district courts, were initially believed to have been targeted. Last month, the chief judges of district courts across the country were quietly warned to move those kinds of cases off the regular document-management system, according to officials briefed on the request. They were initially told not to discuss the matter with other judges in their districts.
Read more of this story at Slashdot.
None under active exploit…yet
Microsoft’s August Patch Tuesday flaw-fixing festival addresses 111 problems in its products; a dozen of them are deemed critical, and one moderate-severity flaw is listed as being publicly known.…
An anonymous reader quotes a report from NPR: Boston Public Library, one of the oldest and largest public library systems in the country, is launching a project this summer with OpenAI and Harvard Law School to make its trove of historically significant government documents more accessible to the public. The documents date back to the early 1800s and include oral histories, congressional reports and surveys of different industries and communities. "It really is an incredible repository of primary source materials covering the whole history of the United States as it has been expressed through government publications," said Jessica Chapel, the Boston Public Library's chief of digital and online services. Currently, members of the public who want to access these documents must show up in person. The project will enhance the metadata of each document and will enable users to search and cross-reference entire texts from anywhere in the world. Chapel said Boston Public Library plans to digitize 5,000 documents by the end of the year, and if all goes well, grow the project from there. Because of this historic collection's massive size and fragility, getting to this goal is a daunting process. Every item has to be run through a scanner by hand. It takes about an hour to do 300-400 pages.
Harvard University said it could help. Researchers at the Harvard Law School Library's Institutional Data Initiative are working with libraries, museums and archives on a number of fronts, including training new AI models to help libraries enhance the searchability of their collections. AI companies help fund these efforts, and in return get to train their large language models on high-quality materials that are out of copyright and therefore less likely to lead to lawsuits. "Having information institutions like libraries involved in building a sustainable data ecosystem for AI is critical, because it not just improves the amount of data we have available, it improves the quality of the data and our understanding of what's in it," said Burton Davis, vice president of Microsoft's intellectual property group. [...] OpenAI is helping Boston Public Library cover such costs as scanning and project management. The tech company does not have exclusive rights to the digitized data.
Read more of this story at Slashdot.
IBM and Google report they will build industrial-scale quantum computers containing one million or more qubits by 2030, following IBM's June publication of a quantum computer blueprint addressing previous design gaps and Google's late-2023 breakthrough in scaling error correction.
Current experimental systems contain fewer than 200 qubits. IBM encountered crosstalk interference when it scaled its Condor chip to 1,121 qubits and subsequently adopted low-density parity-check codes, which require about 90% fewer qubits than Google's surface-code method but need longer connections between distant qubits.
Google plans to reduce component costs tenfold to achieve its $1 billion target price for a full-scale machine. Amazon Web Services quantum hardware executive Oskar Painter told FT he estimates useful quantum computers remain 15-30 years away, arguing that while the fundamental physics problems have been resolved, the engineering challenges of scaling have not.
Read more of this story at Slashdot.
spatwei shares a report from SC Media: Just as they had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.
We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver." Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."
"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
Read more of this story at Slashdot.
Meta's Threads has surpassed 400 million monthly active users, adding 50 million in the last quarter and closing the gap with rival X in mobile daily usage. "As of a few weeks ago [there are] more than 400 million people active on Threads every month," said Instagram head Adam Mosseri. "It's been quite the ride over the last two years. This started as a zany idea to compete with Twitter, and has evolved into a meaningful platform that fosters the open exchange of perspectives. I'm grateful to all of you for making this place what it is today. There's so much work to do from our side, more to come." TechCrunch reports: X, meanwhile, has north of 600 million monthly active users, according to previous statements made by its former CEO, Linda Yaccarino. Recent data from market intelligence provider Similarweb showed that Threads is nearing X's daily app users on mobile devices. In June 2025, Threads' mobile app for iOS and Android saw 115.1 million daily active users, marking a 127.8% increase compared to the previous year. On the other hand, X reached 132 million daily active users, reflecting a 15.2% year-over-year decline.
However, Similarweb found that X's worldwide daily web visits are well ahead of Threads, as the [...] social network saw 145.8 million average daily web visits worldwide in June, while Threads had just 6.9 million.
Read more of this story at Slashdot.