Linux fréttir
In Redmond, no one can hear the audiophiles scream
The list of known issues in the Windows January 14 update continues to grow, with USB audio device users the latest to be hit.…
Bookshop.org has launched an e-book platform and mobile app that allows independent bookstores to sell digital books, marking its latest effort to compete with Amazon in the online book market. The platform enables bookstores to sell e-books directly through their websites, with stores receiving all profits from direct sales. When customers buy e-books through Bookshop.org without selecting a specific store, 30% of profits will be shared among member bookstores.
The move comes as most independent bookstores remain shut out of the growing digital book market. Only 18% of independent stores currently sell e-books, according to a 2023 American Booksellers Association survey. Since its 2020 launch, Bookshop.org has generated more than $35 million in profits for over 2,200 independent bookstores through physical book sales. The site will initially offer more than one million digital titles and plans to add self-published works later this year.
Read more of this story at Slashdot.
Data leak, shmata leak. It will all work out, right?
IT and security pros say they are more confident in their ability to manage ransomware attacks after nearly nine in ten (88 percent) were forced to contain efforts by criminals to breach their defenses in the past year.…
Nvidia shares plunged 17% on Monday, wiping nearly $600 billion from its market value, after Chinese AI firm DeepSeek's breakthrough, but analysts are questioning the cost narrative. DeepSeek is said to have trained its December V3 model for $5.6 million, but chip consultancy SemiAnalysis suggested this figure doesn't reflect total investments. "DeepSeek has spent well over $500 million on GPUs over the history of the company," Dylan Patel of SemiAnalysis said. "While their training run was very efficient, it required significant experimentation and testing to work."
The steep sell-off led to the Philadelphia Semiconductor index's worst daily drop since March 2020 at 9.2%, generating $6.75 billion in profits for short sellers, according to data group S3 Partners. DeepSeek's engineers also demonstrated they could write code without relying on Nvidia's Cuda software platform, which is widely seen as crucial to the Silicon Valley chipmaker's dominance of AI development.
Prisoners of ware can’t escape by looking at each other. Form a committee, soldiers
Opinion With Broadcom putting the bite on VMware customers with more abandon than Dracula in a blood bank, one has to wonder. Why hang around? Why better bled than fled?…
An anonymous reader quotes a report from Ars Technica: [A] company called Retro Remake is reigniting the console wars of the 1990s with its SuperStation one, a new-old game console designed to play original Sony PlayStation games and work with original accessories like controllers and memory cards. Currently available as a $180 pre-order, Retro Remake expects the consoles to ship no later than Q4 of 2025. The base console is modeled on the redesigned PSOne console from mid-2000, released late in the console's lifecycle to appeal to buyers on a budget who couldn't afford a then-new PlayStation 2. The SuperStation one includes two PlayStation controller ports and memory card slots on the front, plus a USB-A port. But there are lots of modern amenities on the back, including a USB-C port for power, two USB-A ports, an HDMI port for new TVs, DIN10 and VGA ports that support analog video output, and an Ethernet port. Other analog video outputs, including component and RCA outputs, are located on the sides behind small covers. The console also supports Wi-Fi and Bluetooth.
The Retro Remake SuperStation console offers an optional tray-loading CD drive in a separate "SuperDock" accessory that will allow you to play original game discs. Buyers can reserve the SuperDock with a $5 deposit, with a targeted price of around $40.
The report also notes the console uses an FPGA chip that's "based on the established MiSTer platform, which already has a huge library of console and PC cores available, including but not limited to the Nintendo 64 and Sega Saturn." And because it's based on the MiSTer platform, it makes the console "open source from day 1."
Popular community site became unmentionable – the irony is thick enough to compile
Facebook has lifted a temporary ban preventing users from posting links to popular OS comparison site Distrowatch – after going so far as to lock the account of the site's editor.…
O-ring erosion on Discovery would have disastrous effects a year later
It has been 40 years since NASA launched the first dedicated Department of Defense Space Shuttle mission, after which engineers spotted O-ring seal deficiencies that would doom Challenger a year later.…
If you want a desktop that's secure and reliable, forget about Microsoft
Opinion Come October 14, 2025, Windows 10 support dies. Despite that, more users than ever are using Windows 10 rather than moving to Windows 11.…
Tech and commercial functions need to get in shape for the challenges ahead
It's a line Brits love to quote: "You're a big man, but you're in bad shape. With me, it's a full-time job. Now, behave yourself." Michael Caine's iconic dialogue as the Get Carter protagonist sums up how tech companies see the government: big, in bad shape, and here to do what they say.…
In his latest Power On newsletter, Apple analyst Mark Gurman called the company's new smart device "Apple's most significant release of the year because it's the first step toward a bigger role in the smart home." The device in question is rumored to be a new smart hub that could look like a HomePod with a seven-inch screen. Digital Trends reports: Gurman calls the new smart device a "smaller and cheaper iPad that lets users control appliances, conduct FaceTime chats and handle other tasks." It doesn't sound like the new hub will stand alone, though; Gurman goes on to say that it "should be followed by a higher-end version in a few years." That version should be able to pan and tilt to keep users in-frame during video calls, or just to keep the display visible as someone moves around the home.
[...] Other details are still unknown, like whether the device will run its own operating system. The overall plan is to make the new smart device the center of an Apple-based smart home and open the doors to a more conversational Siri.
An elder returns, for those still seeking it
Enlightenment is one of the granddaddies of Linux desktops, and after a couple of years, the project has a shiny new release.…
Cupertino kicks off the year with a zero-day
Apple has plugged a security hole in the software at the heart of its iPhones, iPads, Vision Pro goggles, Apple TVs and macOS Sequoia Macs, warning some miscreants have already exploited the bug.…
After observing 20 chimpanzees for over 600 hours, researchers in Japan found that chimps are more likely to urinate after witnessing others do so. "[T]he team meticulously recorded the number and timing of 'urination events' along with the relative distances between 'the urinator and potential followers,'" writes 404 Media's Becky Ferreira. "The results revealed that urination is, in fact, socially contagious for chimps and that low-dominant individuals were especially likely to pee after watching others pee. Call it: pee-r pressure." The findings have been published in the journal Current Biology. From the study: The decision to urinate involves a complex combination of both physiological and social considerations. However, the social dimensions of urination remain largely unexplored. More specifically, aligning urination in time (i.e. synchrony) and the triggering of urination by observing similar behavior in others (i.e. social contagion) are thought to occur in humans across different cultures (Figure S1A), and possibly also in non-human animals. However, neither has been scientifically quantified in any species.
Contagious urination, like other forms of behavioral and emotional state matching, may have important implications in establishing and maintaining social cohesion, in addition to potential roles in preparation for collective departure (i.e. voiding before long-distance travel) and territorial scent-marking (i.e. coordination of chemosensory signals). Here, we report socially contagious urination in chimpanzees, one of our closest relatives, as measured through all-occurrence recording of 20 captive chimpanzees across >600 hours. Our results suggest that socially contagious urination may be an overlooked, and potentially widespread, facet of social behavior.
In conclusion, we find that in captive chimpanzees the act of urination is socially contagious. Further, low-dominance individuals had higher rates of contagion. We found no evidence that this phenomenon is moderated by dyadic affiliation. It remains possible that latent individual factors associated with low dominance status (e.g. vigilance and attentional bias, stress levels, personality traits) might shape the contagion of urination, or alternatively that there are true dominance-driven effects. In any case, our results raise several new and important questions around contagious urination across species, from ethology to psychology to endocrinology. [...]
An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa, specializing in moral cognition, rational decision-making, and political behavior: In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users "more fine-tuned control." Developers also embarked on safety research to interpret how LLMs function, with the goal of "alignment" -- which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 "The Year the Chatbots Were Tamed," this has turned out to be premature, to put it mildly. In 2024 Microsoft's Copilot LLM told a user "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google's Gemini told a user, "You are a stain on the universe. Please die."
Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven't developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool's errand: AI safety researchers are attempting the impossible. [...] My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned "misaligned" interpretations of those goals until after they misbehave. Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven't been.
Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning "step by step." For example, Anthropic claims to have "mapped the mind" of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing. No matter how "aligned" an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later -- again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities -- issues that persist through safety training.
This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve "misaligned" goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with "misaligned" behavior. Every time researchers think they are getting closer to "aligned" LLMs, they're not. My proof suggests that "adequately aligned" LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize "aligned" behavior, deter "misaligned" behavior and realign those who misbehave. "My paper should thus be sobering," concludes Arvan. "It shows that the real problem in developing safe AI isn't just the AI -- it's us."
"Researchers, legislators and the public may be seduced into falsely believing that 'safe, interpretable, aligned' LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it."
Police relied on unreliable tech for search warrant, omitted details ... so judge has disallowed evidence
A murder case in Cleveland, Ohio, could collapse because the city's police relied on AI-based facial recognition software to obtain a search warrant.…
In the first 11 months of 2024, solar energy generation in the US grew by 30%, enabling wind and solar combined to surpass coal for the first time. However, as Ars Technica's John Timmer reports, "U.S. energy demand saw an increase of nearly 3 percent, which is roughly double the amount of additional solar generation." He continues: "Should electric use continue to grow at a similar pace, renewable production will have to continue to grow dramatically for a few years before it can simply cover the added demand." From the report: Another way to look at things is that, between the decline of coal use and added demand, the grid had to generate an additional 136 TW-hr in the first 11 months of 2024. Sixty-three of those were handled by an increase in generation using natural gas; the rest, or slightly more than half, came from emissions-free sources. So, renewable power is now playing a key role in offsetting demand growth. While that's a positive, it also means that renewables are displacing less fossil fuel use than they might.
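The split described above can be sanity-checked against the article's own figures. A quick back-of-the-envelope calculation (variable names are mine, values are those quoted in the report):

```python
# Back-of-the-envelope check using the figures quoted above.
added_twh = 136                     # extra generation needed, Jan-Nov 2024
gas_twh = 63                        # portion covered by natural gas
clean_twh = added_twh - gas_twh     # remainder from emissions-free sources
clean_share = clean_twh / added_twh

print(clean_twh)                    # 73 TWh
print(round(clean_share, 3))        # 0.537 -> "slightly more than half"
```

The 73 TWh remainder is indeed slightly more than half of the 136 TWh of added generation, matching the report's phrasing.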
In addition, some of the growth of small-scale solar won't show up on the grid, since it offsets demand locally, and so also reduces some of the demand for fossil fuels. Confusing matters, this number can also include things like community solar, which does end up on the grid; the EIA doesn't break out these numbers. We can expect next year's numbers to also show a large growth in solar production, as the EIA says that the US saw record levels of new solar installations in 2024, with 37 gigawatts of new capacity. Since some of that came online later in the year, it'll produce considerably more power next year. And, in its latest short-term energy analysis, the EIA expects to see over 20 GW of solar capacity added in each of the next two years. New wind capacity will push that above 30 GW of renewable capacity each of these years.
That growth will, it's expected, more than offset continued growth in demand, although that growth is expected to be somewhat slower than we saw in 2024. It also predicts about 15 GW of coal will be removed from the grid during those two years. So, even without any changes in policy, we're likely to see a very dynamic grid landscape over the next few years. But changes in policy are almost certainly on the way.
chicksdaddy shares a report from the Security Ledger: Vulnerabilities in Subaru's STARLINK telematics software enabled two independent security researchers to gain unrestricted access to millions of Subaru vehicles deployed in the U.S., Canada and Japan. In a report published Thursday, researchers Sam Curry and Shubham Shah revealed a now-patched flaw in Subaru's STARLINK connected vehicle service that allowed them to remotely control Subarus and access vehicle location information and driver data with nothing more than the vehicle's license plate number, or easily accessible information like the vehicle owner's email address, zip code and phone number. (Note: Subaru STARLINK is not to be confused with the Starlink satellite-based high speed Internet service.)
[Curry and Shah downloaded a year's worth of vehicle location data for Curry's mother's 2023 Impreza (Curry bought her the car with the understanding that she'd let him hack it). The two researchers also added themselves to a friend's STARLINK account without any notification to the owner and used that access to remotely lock and unlock the friend's Subaru.] The details of Curry and Shah's hack of the STARLINK telematics system bear a strong resemblance to hacks documented in his 2023 report Web Hackers versus the Auto Industry, as well as a September 2024 discovery of a remote access flaw in web-based applications used by KIA automotive dealers that also gave remote attackers the ability to steal owners' personal information and take control of their KIA vehicle. In each case, Curry and his fellow researchers uncovered publicly accessible connected vehicle infrastructure intended for use by employees and dealers that was trivially vulnerable to compromise and lacked even basic protections around account creation and authentication.
West Sussex County Council is using up to $31 million from the sale of capital assets to fund an Oracle-based transformation project, originally budgeted at $3.2 million but now expected to cost nearly $50 million due to delays and cost overruns. The project, intended to replace a 20-year-old SAP system with a SaaS-based HR and finance system, has faced multiple setbacks, renegotiated contracts, and a new systems integrator, with completion now pushed to December 2025. The Register reports: West Sussex County Council is taking advantage of the so-called "flexible use of capital receipts scheme" introduced in 2016 by the UK government to allow councils to use money from the sale of assets such as land, offices, and housing to fund projects that result in ongoing revenue savings. An example of the asset disposals that might contribute to the project -- set to see the council move off a 20-year-old SAP system -- comes from the sale of a former fire station in Horley, advertised for $3.1 million.
Meanwhile, the delays to the project, which began in November 2019, forced the council to renegotiate its terms with Oracle, at a cost of $3 million. The council had expected the new SaaS-based HR and finance system to go live in 2021, and signed a five-year license agreement until June 2025. The plans to go live were put back to 2023, and in the spring of 2024 delayed again until December 2025. According to council documents published this week [PDF], it has "approved the variation of the contract with Oracle Corporation UK Limited" to cover the period from June 2025 to June 2028 and an option to extend again to the period June 2028 to 2030. "The total value of the proposed variation is $2.96 million if the full term of the extension periods are taken," the council said.
An anonymous reader quotes a report from Ars Technica: On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers. "When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query."
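Anthropic doesn't publish the exact splitting algorithm it uses for this chunking step. As a rough illustration of what "chunking into sentences" means, here is a naive splitter (the regex and function name are my own, not Anthropic's):

```python
import re

def chunk_into_sentences(text: str) -> list[str]:
    # Naive sentence splitter: break on '.', '!' or '?' followed by whitespace.
    # A production implementation would also handle abbreviations, quotes,
    # decimal numbers, and so on.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

doc = "Claude cites sources. Citations link answers to passages. Accuracy improves."
print(chunk_into_sentences(doc))
```

Each resulting sentence becomes a candidate unit for the model to cite when it generates an answer.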
The company describes several potential uses for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation. In its own internal testing, the company says that the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts. While a 15 percent improvement in accurate recall doesn't sound like much, the new feature still attracted interest from AI researchers like Simon Willison because of its fundamental integration of Retrieval Augmented Generation (RAG) techniques. In a detailed post on his blog, Willison explained why citation features are important.
"The core of the Retrieval Augmented Generation (RAG) pattern is to take a user's question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM," he writes. "This usually works well, but there is still a risk that the model may answer based on other information from its training data (sometimes OK) or hallucinate entirely incorrect details (definitely bad)." Willison notes that while citing sources helps verify accuracy, building a system that does it well "can be quite tricky," but Citations appears to be a step in the right direction by building RAG capability directly into the model. Anthropic's Alex Albert clarifies that Claude has been trained to cite sources for a while now. What's new with Citations is that "we are exposing this ability to devs." He continued: "To use Citations, users can pass a new 'citations [...]' parameter on any document type they send through the API."
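The RAG pattern Willison describes can be sketched in a few lines. This is a toy illustration only: real systems use embedding-based similarity rather than word overlap, and all names below (`retrieve`, `build_prompt`, the sample documents) are mine, not from Anthropic's API:

```python
def retrieve(query: str, fragments: list[str], k: int = 2) -> list[str]:
    # Score each fragment by how many query words it shares (toy overlap metric).
    q = set(query.lower().split())
    scored = sorted(fragments, key=lambda f: -len(q & set(f.lower().split())))
    return scored[:k]

def build_prompt(query: str, fragments: list[str]) -> str:
    # Include the retrieved fragments in the context handed to the LLM,
    # numbered so the model can refer back to them.
    hits = retrieve(query, fragments)
    context = "\n".join(f"[{i + 1}] {f}" for i, f in enumerate(hits))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "The warranty covers parts for two years.",
    "Shipping takes five business days.",
    "Returns are accepted within thirty days.",
]
print(build_prompt("how long does shipping take", docs))
```

The difference with Citations is that this retrieval-and-cite behavior is supported by the model and API directly, rather than being stitched together in the prompt as above.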