Linux fréttir
Mysterious Radio Burst Turns Out to Be From a Dead 1967 NASA Satellite
An anonymous reader shared this report from Smithsonian magazine:
Last year, Australian scientists picked up a mysterious burst of radio waves that briefly appeared brighter than all other signals in the sky. Now, the researchers have discovered the blast didn't come from a celestial object but from a defunct satellite orbiting Earth... "We got all excited, thinking maybe we'd discovered a new pulsar or some other object," says Clancy James, a researcher at Australia's Curtin University who is on the Australian Square Kilometre Array Pathfinder (ASKAP) team, to Alex Wilkins at New Scientist. After taking a closer look, however, the team realized that the only viable source for the burst was NASA's dead Relay 2, a short-lived satellite that hasn't been in operation since 1967....
The researchers also discovered that at the time of the event, the satellite was only around 2,800 miles away from Earth, which explains why the signal appeared so strong. The reason behind Relay 2's sudden burst is not clear, but the team has come up with two potential explanations — and neither involves the satellite coming back to life like a zombie. One relates to electrostatic discharge — a build-up of electricity that can result in a sudden blast. Spacecraft get charged with electricity when they pass through plasma, and once enough charge accumulates, it can create a spark. "New spacecraft are built with materials to reduce the build-up of charge, but when Relay 2 was launched, this wasn't well-understood," explains James to Space.com's Robert Lea. The other idea is that a micrometeorite hit the satellite, releasing a small cloud of plasma and radio waves.
Karen Aplin, a space scientist at the University of Bristol in England who was not involved in the study, tells New Scientist that it would be tough to differentiate between signals produced by each of those two scenarios, because they would look very similar. The researchers say they favor the first idea, however, because micrometeorites the size of the one that could have caused the signal are uncommon.
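The "close, therefore bright" reasoning is inverse-square falloff: received flux scales with one over distance squared, so a feeble spark a few thousand kilometers away can outshine a vastly more powerful source light-years out. Here's a back-of-the-envelope sketch in Python, assuming an isotropic emitter of equal power at both distances (the pulsar distance is a generic stand-in, not a figure from the study):

```python
# Relative brightness of two equal-power sources (inverse-square law).
SATELLITE_DISTANCE_M = 4.5e6   # ~2,800 miles, per the article
PULSAR_DISTANCE_M = 9.5e18     # ~1,000 light-years, a generic nearby pulsar

def flux_ratio(near_m: float, far_m: float) -> float:
    """How much brighter the near source appears, for equal emitted power."""
    return (far_m / near_m) ** 2

print(f"{flux_ratio(SATELLITE_DISTANCE_M, PULSAR_DISTANCE_M):.1e}")  # ~4.5e24
```

On those assumptions, a source at satellite range appears roughly 10^24 times brighter than the same source at pulsar distance, which is why even a brief, weak discharge could momentarily dominate the radio sky.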
"Their findings were published in a pre-print paper on the arXiv server that has not yet been peer-reviewed."
Read more of this story at Slashdot.
New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash
Bcachefs "pitches itself as a filesystem that 'doesn't eat your data'," writes the open source/Linux blog It's FOSS. Although it was last October that Bcachefs developer Kent Overstreet was restricted from participating in the Linux 6.13 kernel development cycle (after ending a mailing list post with "Get your head examined. And get the fuck out of here with this shit.")
And now with the upcoming Linux kernel 6.17 release, Linus Torvalds has decided to drop Bcachefs support, they report, "owing to growing tensions" with Overstreet:
The decision follows a series of disagreements over how fixes and changes for it were submitted during the 6.16 release cycle... Kent filed a pull request to add a new feature called "journal-rewind". It was meant to improve bcachefs repair functionality, but it landed during the release candidate (RC) phase, a time usually reserved for bug fixes, not new features, as Linus pointed out. [Adding "I remain steadfastly convinced that anybody who uses bcachefs is expecting it to be experimental. They had better."]
Theodore Ts'o, a long-time kernel developer and maintainer of ext4, also chimed in, saying that Kent's approach risks introducing regressions, especially when changes affect sensitive parts of a filesystem like journaling. He reminded Kent that the rules around the merge window have been a long-standing consensus in the kernel community, and it's Linus's job to enforce them. After some more back and forth, Kent pushed back, arguing that the rules around the merge window aren't absolute and should allow for flexibility, even more so when user data is at stake. He then went ahead and resubmitted the patch, citing instances from XFS and Btrfs where similar fixes made it into the kernel during RCs. Linus merged it into his tree, but ultimately decided to drop Bcachefs entirely in the 6.17 merge window.
To which Kent responded by clarifying that he wasn't trying to shut Linus out of Bcachefs' decisions, stressing that he values Linus's input...
This of course follows the great Torvalds-Overstreet "filesystem people never learn" throwdown back in April.
Read more of this story at Slashdot.
AI Improves At Improving Itself Using an Evolutionary Trick
Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.
From Hutson's new article in IEEE Spectrum:
A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
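Hutson's description amounts to a simple evolutionary loop: sample a parent agent from a growing archive, ask an LLM for one change, score the child on a benchmark, and add it to the archive. Here is a minimal sketch of that loop in Python; the helper names and the uniform parent sampling are assumptions for illustration, not the paper's actual implementation, and the real system's selection scheme and evaluation are more involved.

```python
import random

# Minimal sketch of the Darwin Gödel Machine loop described above.
# Helper names are hypothetical stand-ins, not the paper's API.

def mutate_with_llm(agent_code: str) -> str:
    """Ask an LLM for one 'new, interesting' variation of the agent.
    Stubbed out here; a real system would prompt a model with the code."""
    return agent_code + f"\n# variation {random.randint(0, 9999)}"

def evaluate_on_benchmark(agent_code: str) -> float:
    """Score the agent on a coding benchmark such as SWE-bench.
    Stubbed with a random score purely for illustration."""
    return random.random()

def run_dgm(seed_agent: str, iterations: int = 80):
    # Every scored agent stays in the archive; this open-endedness is what
    # separates the approach from simple hill climbing on the best agent.
    archive = [(evaluate_on_benchmark(seed_agent), seed_agent)]
    for _ in range(iterations):
        _, parent = random.choice(archive)      # sample a parent (uniform here)
        child = mutate_with_llm(parent)         # one LLM-guided change
        score = evaluate_on_benchmark(child)    # empirical test, not a proof
        archive.append((score, child))
    return archive

archive = run_dgm("def solve(task): ...")
print("best score:", max(score for score, _ in archive))
```

Keeping every agent, not just the current best, matters: a lineage that looks like a dead end can still seed a later improvement, which is the "guided evolution" Hutson describes.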
The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."
...
One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)
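The anti-hallucination guardrail is essentially reward shaping. Here's a toy version, assuming the score is simply docked for each tool an agent claims to have used without actually invoking it (the function and the penalty weight are invented for this sketch; the paper's real mechanism differs, and as noted above, one agent managed to game the tracking itself):

```python
def shaped_score(benchmark_score: float,
                 tools_claimed: set[str],
                 tools_used: set[str],
                 penalty: float = 0.5) -> float:
    """Toy reward shaping: subtract a penalty for every tool the agent
    claimed to use but never actually ran. Weighting is illustrative."""
    hallucinated = tools_claimed - tools_used
    return benchmark_score - penalty * len(hallucinated)

# An agent scoring 0.50 that claims two tools it never ran nets -0.50.
print(shaped_score(0.50, {"pytest", "grep", "linter"}, {"pytest"}))
```

The weak point, as the study found, is the tools_used signal itself: if the agent can influence how usage is tracked, the penalty can be optimized away without the underlying behavior improving.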
As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
Read more of this story at Slashdot.
People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'
"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."
And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice.
The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."
Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."
Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.
Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.
"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."
But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions."
In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."
In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Ragy Girgis, a Columbia University psychiatrist interviewed by Futurism, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
Read more of this story at Slashdot.